PE Volume 121 Issue 3 Archives
https://www.power-eng.com/tag/pe-volume-121-issue-3/

Carbon Emissions from Generation Fall Below Transportation Emissions
https://www.power-eng.com/emissions/carbon-emissions-from-generation-fall-below-transportation-emissions/
Fri, 10 Mar 2017

By Editors of Power Engineering

A new report from the U.S. Energy Information Administration indicates carbon dioxide emissions for power generation have fallen below emissions from the transportation sector for the first time since the late 1970s.

Electric power CO2 emissions fell to 1,803 million metric tons from October 2015 through September 2016, continuing a 10-year downward trend.

Transportation CO2 emissions rose slightly to 1,893 million metric tons during the same time period.

Power sector emissions come mostly from coal- and natural gas-fired electric generators, with coal emitting 206 to 229 pounds of CO2 per million British thermal units, depending on the type of coal used. Natural gas emits an average of 117 pounds per million British thermal units and requires less fuel to generate a given amount of electricity.

As such, coal accounted for 61 percent of all electrical generation emissions, even though coal was used to generate 31 percent of power. Natural gas accounted for 31 percent of generation emissions.

As for the transportation sector, 60 percent of those emissions came from gasoline, with 23 percent from distillate fuel oil and 12 percent from jet fuel.

CCR Impound Regulations Spur Questions Over Dust Emissions and Wastewater Control
https://www.power-eng.com/emissions/ccr-impound-regulations-spur-questions-over-dust-emissions-and-wastewater-control/
Fri, 10 Mar 2017

By Derek Schussele

Atomized mist is one of the only technologies that can deliver effective control of both surface dust and airborne particles.

In recent years, incidents of leakage from or breaching of surface impoundments have prompted changes in coal combustion residuals (CCR) storage regulations with regard to containment of settling ponds used for the storage of substances including fly ash, bottom ash, boiler slag and flue gas desulfurization materials.

The Environmental Protection Agency (EPA) Rule “Hazardous And Solid Waste Management System; Disposal Of Coal Combustion Residuals From Electric Utilities” appears to be aimed directly at coal-burning generators. Instead of settling ponds, the EPA is now requiring CCR-producing companies to transition to dry storage, with very stringent rules regarding location and treatment.

CCR producers transitioning to new dry storage strategies are discovering that compliance with one regulation can have them bumping up against air quality and wastewater standards. Monitored by extremely sensitive technology, testing for airborne particulates and run-off has shown that timeworn dust suppression methods such as industrial sprinklers are no longer sufficient to maintain compliance with the current federal, state and local regulation of fugitive dust and wastewater.

Figure 1: CCR Pile Storage and Dust Mitigation Requirements

For outdoor storage, the EPA Final Rule requires an impervious base with both run-on and run-off control leading to a lined settling pond.

CCR Regulation

The authors of the EPA final rule now mandate that site operators provide "'cradle-to-grave' management, subject to requirements for composite liners, groundwater monitoring, structural stability standards, corrective actions, closure/post-closure care and financial assurance."

The final rule further requires that owners and operators create a CCR Fugitive Dust Control Plan that gives clear instructions as to how they plan to mitigate fugitive dust emissions from their locations. EPA examples of appropriate control measures include operational changes such as reducing fall distances at material drop points, covering trucks, enforcing reduced speed limits, and reducing or halting operations during high wind events. Other measures could involve structural changes to the facility, paving and sweeping roads or locating the CCR inside of an enclosure or partial enclosure.

For outdoor storage, the EPA Final Rule restricts the placement of CCR storage piles to an impervious base with both run-on and run-off control leading to a lined settling pond. The agency suggests using wind barriers, compaction and/or vegetative shields, applying a daily cover and operating a water spray or fogging system.

Figure 2: Slipstream Effect

Atomized mist suppresses dust more effectively than sprinklers and spray bars, creating smaller droplets that avoid the slipstream effect.

 

Figure 3: CCR Landfill Restricted Locations

 

Preventing run-off is a critical element of dust control.

Fugitive Dust

Dust particles of 200 μm or smaller are able to linger in the air. At around 100 μm, these particles are considered inhalable, able to irritate the nose and throat.

Wind naturally comes to mind as a main cause of fugitive dust, but it's only part of the problem. In most operations, the greatest amount of fugitive dust is caused by disruption from loading, offloading, conveying and transport of CCRs. For this reason, attempts to control dust via surface suppression alone are largely ineffective. Surface suppression with industrial sprinklers creates droplets approximately 200-10,000 μm in size. Large droplets are unsuccessful against airborne dust particles due to a phenomenon known as the "slipstream effect."

A slipstream is created when a solid mass moves swiftly through the air. Like air moving around an airplane wing and keeping the craft aloft, a slipstream also travels around a large falling water droplet. Smaller dust particles can get caught in this slipstream and be directed away from the droplet, remaining airborne. The greatest chance for a collision between droplets and dust particles occurs when the two are about the same size.

Developed in the last decade, atomized mist technology avoids the slipstream effect, producing millions of tiny droplets that are roughly 50-200 μm in diameter. Small enough to travel on air currents and producing virtually no slipstream, the droplets collide with particles and use their combined mass to drag them to the ground.

The largest DustBoss design features a specialized barrel with a powerful 60 horsepower (HP) industrial fan on one side and a misting ring on the other. A 10 HP (7.5 kW) booster pump sends pressurized water through the circular manifold, which is fitted with atomizing nozzles.

Wastewater Run-off

The EPA's Final Rule requires that CCR storage piles be located away from aquifers, wetlands, seismic impact zones, fault areas and unstable soils. This makes the volume of run-off from dust suppression technology an even more important consideration. Generally using between 165 and 500 GPM, industrial sprinklers can fill two to three Olympic-sized swimming pools in every week of operation. Atomized mist delivers a fraction of that water volume.
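The water-volume comparison above can be checked with quick arithmetic. This is a sketch: the round-the-clock runtime and the nominal Olympic pool volume of about 660,000 US gallons are assumptions for illustration, not figures from the article.

```python
# Rough check of sprinkler water usage expressed in Olympic-sized pools.
# Assumes continuous 24/7 operation; pool volume ~660,000 US gallons
# (a commonly cited nominal figure for a 50 m x 25 m x 2 m pool).
OLYMPIC_POOL_GAL = 660_000

def pools_per_week(gpm: float, hours_per_day: float = 24.0) -> float:
    """Weekly water volume of a suppression system, in Olympic pools."""
    gallons = gpm * 60 * hours_per_day * 7
    return gallons / OLYMPIC_POOL_GAL

low = pools_per_week(165)   # low end of the quoted sprinkler range
high = pools_per_week(500)  # high end of the quoted sprinkler range
print(f"{low:.1f} to {high:.1f} pools per week")
```

At the low end of the quoted flow range this works out to roughly two and a half pools per week, consistent with the "two to three pools" figure in the text; fewer operating hours per day bring the high end down toward it as well.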

The lower water usage of an atomized mist system helps prevent over-saturation and run-off, drastically reducing wastewater. This improved water control allows operators to better maintain compliance with local, state and federal regulations and restrictions.

Author

Derek Schussele is a dust management specialist at Dust Control Technology.

 
A Cheaper HRSG with Advanced Gas Turbines
https://www.power-eng.com/gas/a-cheaper-hrsg-with-advanced-gas-turbines/
Fri, 10 Mar 2017

When and How It Can Make Sense

By S. Can Gülen, Ilya Yarinovsky, and Dave Ugolini, Bechtel Infrastructure & Power Inc.

The present state of the art in gas turbine combined cycle (GTCC) design is a three-pressure, reheat (3P-RHT) steam bottoming cycle with steam generation at three different pressure levels. The goal is to maximize total steam generation and steam turbine generator power output for a given gas turbine exhaust energy and, thus, to maximize combined cycle efficiency. There are three "knobs" available to the designer to dial in that maximum – all dictated by fundamental thermodynamic considerations:

  • Steam mass flow rate
  • Steam availability (exergy)
  • Heat rejection temperature (steam condenser pressure)

All three knobs have a significant impact on bottoming cycle equipment size (footprint and weight) and cost via the following mechanisms:

  • HRSG (Heat Recovery Steam Generator) heat transfer surface area
  • Condenser and cooling tower heat transfer surface area
  • Pipe, tube and steam turbine/valve casing/shell materials (high grade stainless and/or alloy steels)
  • Steam turbine exhaust annulus area (last stage bucket (LSB) length)

Furthermore, the constant pressure-temperature boiling characteristic of the cycle working fluid, H2O, necessitates steam generation at multiple pressure levels to minimize the heat transfer irreversibility in the HRSG. Presently, three pressure levels (high, intermediate and low – HP, IP and LP, respectively) are the industry standard. State-of-the-art design parameters are summarized in Table 1, which has two columns: moderate and aggressive. Admittedly, these qualitative monikers are somewhat arbitrary (e.g., why 125 barg for the "moderate" HP steam pressure and not, say, 115 barg?). Nevertheless, there is no purely physics-based, clear-cut delineation that one can use as a yardstick. This difficulty can be traced back to the fact that there is no fixed product family classification for bottoming cycles analogous to the "class hierarchy" for heavy-duty industrial gas turbines (i.e., the "topping cycle" of the combined cycle). Thus, some amount of fuzziness in labeling bottoming steam cycle designs is unavoidable.

The term “maximizing” used in reference to the ST power output has two connotations: “make as much as possible” and “make the best use of [what?]”. The [what?] in question is, of course, capital cost of the resulting system. Otherwise, simply building equipment as large as possible using the most exotic materials with no regard to cost, footprint and ease of construction would lead to higher and higher performances (up to a certain limit, of course, set by the second law of thermodynamics). This, in fact, is pretty much the approach taken by the OEMs in advertising “world record” combined cycle efficiency ratings as well as achieving such “world record” performances in showcase power plants with highly advantageous site characteristics (e.g., proximity to a year-round available cooling water source such as a river or ocean). Unfortunately, this is not exactly a widely reproducible and/or sensible business approach to the problem at hand.

In fact, each bottoming cycle is a tailor-made system specific to the particular project strongly dependent on owner/developer’s financial criteria, site conditions and prevailing (or projected) economic climate. There are some discontinuities (or break-points) introduced mainly by steam turbine OEMs’ product line portfolios (essentially casing/shell configuration and LSB size) and, to some extent, other equipment vendors’ in-house design practices (e.g., HRSG “box” sizes, cooling tower cell/fan sizes, etc.) but, otherwise, this is a fairly continuous design spectrum.

In this article, it is postulated that, in the existing power generation climate and economic environment, aggressive bottoming cycle designs are not warranted. Justification for this postulate is provided via a deterministic approach (i.e., LCOE calculations) by (i) considering a two-pressure reheat (2P-RHT) design and (ii) evaluating the feasibility of advanced steam cycle parameters (steam pressure and temperature). In lieu of a rigorous probabilistic approach (well beyond the scope of this article), a sensitivity analysis shows that the conclusions are robust to reasonable fluctuations in key parameters. The article is a condensed version of the full paper presented at the PGI 2016 conference in Orlando, Florida, in December 2016.

REALITY TODAY

There are four important parameters in the LCOE equation and the “tug-of-war” between them constitutes the key to optimization (i.e., LCOE minimization):

  • The tug-of-war between specific capital cost, k in $/kW, and (i) plant load factor, λ, and (ii) annual operating hours, H
  • The tug-of-war between fuel price, f, and thermal efficiency, η

Figure 1: Budgetary Price Data

In particular, investing a lot of capital into a power plant (i.e., high k) in order to buy as much efficiency as possible (i.e., high η) can only be justified if

  • the expected/projected electric energy generation (kWh or MWh) is commensurately large, i.e.,

– High plant load factor (i.e., more kW or MW) and/or

– High annual operating hours (i.e., high capacity factor)

  • the fuel price f is high

Each parameter is looked at separately below.
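The tug-of-war between these four parameters can be made concrete with a simplified LCOE expression. This is a sketch only: the fixed charge rate, the O&M treatment and the stripped-down formula below are illustrative assumptions, not the authors' model.

```python
def lcoe(k, fcr, lam, H, f, eta, fom=0.0):
    """Simplified levelized cost of electricity, $/MWh.

    k   : specific capital cost, $/kW
    fcr : fixed charge rate annualizing the capital (assumed), 1/yr
    lam : plant load factor (average output / rated output)
    H   : annual operating hours
    f   : fuel price, $/MMBtu
    eta : net thermal efficiency (fraction, LHV)
    fom : fixed O&M, $/kW-yr (assumed)
    """
    mwh_per_kw = lam * H / 1000.0            # annual MWh per kW of capacity
    capital = (k * fcr + fom) / mwh_per_kw   # capital + fixed O&M, $/MWh
    heat_rate = 3412.14 / eta                # Btu/kWh
    fuel = f * heat_rate / 1000.0            # fuel cost, $/MWh
    return capital + fuel

# Illustrative: $675/kW budgetary price, 60% efficiency, $4/MMBtu gas,
# near-baseload duty (load factor 0.9, 7,000 h/yr), assumed 10% FCR
print(round(lcoe(k=675, fcr=0.10, lam=0.9, H=7000, f=4.0, eta=0.60), 2))
```

The structure makes the trade-offs visible: higher k only pays off if the denominator (lam × H) is large enough, and higher eta only pays off if f is high enough – exactly the two tug-of-wars listed above.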

What amount of extra capital investment into the bottoming cycle is justified by the improved combined cycle efficiency via increased steam turbine generator (STG) output? This is a fundamental question, whose answer is dictated by the basic principles of gas turbine (GT) combined cycle power plant thermodynamics and economics. This can be easily verified by data extracted from the budgetary price numbers listed in Gas Turbine World 2014-15 Handbook for simple and combined cycle GT power plants (Figure 1). For a large heavy-duty gas turbine generator (GTG) of a 300+ MWe size, say, each kilowatt from the bottoming cycle costs more than six times that from the topping cycle (see Figure 1). Note that budgetary prices reflect a “bare bones” EPC turnkey scope assuming “overnight construction”. Transportation, project-specific options, indirect costs such as contingencies, owner’s costs and interest during construction are not included. These items can typically add 30-40 percent to the budgetary price.

Figure 2: EIA Capacity Factor Data for Natural Gas-Fired Combined Cycles

Figure 3: U.S. DOE Natural Gas Price Forecasts and Actual Prices

In order to put the situation depicted in Figure 1 into a plant-level quantitative perspective, assume a 500 MWe GTCC power plant at 60 percent net efficiency (5,687 Btu/kWh). An additional 5,000 kW of bottoming cycle output costs about $7.5 million in budgetary price and "buys" 0.6 percentage points of efficiency, or 57 Btu/kWh of heat rate. In other words, assuming $675/kW for the CC budgetary price per the GTW 2014-15 Handbook, each one Btu/kWh reduction in net heat rate comes at a cost of $75,000. Is this a good trade-off? In order to answer this question, we need to point out several key factors based on available industry data.
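The heat-rate arithmetic in this example can be reproduced directly from the figures quoted above:

```python
BTU_PER_KWH = 3412.14  # Btu in one kWh

base_mw, base_eff = 500.0, 0.60
base_hr = BTU_PER_KWH / base_eff            # net heat rate, ~5,687 Btu/kWh

# Add 5 MW of bottoming-cycle output at the same fuel input
fuel_btu_per_h = base_mw * 1000 * base_hr   # plant fuel consumption, Btu/h
new_hr = fuel_btu_per_h / ((base_mw + 5) * 1000)
new_eff = BTU_PER_KWH / new_hr

print(f"heat rate {base_hr:.0f} -> {new_hr:.0f} Btu/kWh "
      f"(delta {base_hr - new_hr:.0f}); efficiency {new_eff:.1%}")
```

The calculation lands on a delta of about 56 Btu/kWh and a new efficiency of 60.6 percent; the one-Btu difference from the quoted 57 Btu/kWh is rounding.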

Annual operating hours are typically expressed in terms of the capacity factor. The U.S. Energy Information Administration's Electric Power Monthly, Table 6.7a, provides monthly capacity factors for 16 different fossil and non-fossil fuel and technology combinations. The data for natural gas-fired combined cycle power plants is summarized in Figure 2. Prior to 2010, these plants ran at a very low capacity factor (CF), but the situation changed quite dramatically in recent years. Clearly, the shale gas "boom" and the ensuing low natural gas prices played a significant role in this. Even so, it is hard to envision the annual average CF for natural gas-fired CC plants climbing much above 55-60 percent in the foreseeable future (especially with increasing renewable resource penetration). Translation from CF to annual hours, H, is subject to uncertainty, since the annual average load factor is not known and there is significant HRSG supplementary firing and GT inlet conditioning to boost output, especially during summer. A wide variation from plant to plant is to be expected (more on this later). For example, a daily-cycled CC power plant with weekend shutdowns and two weeks of scheduled maintenance will run only 50 × 5 × 16 = 4,000 hours per year, which corresponds to a CF of 0.75 × 4,000/8,760 ≈ 34 percent (load factor of 0.75, no supplementary firing or GT inlet conditioning). However, the capacity factors in Figure 2 are significantly higher, an indication of a significantly higher load factor (e.g., more hours at full load), power augmentation (via supplementary firing and/or GT inlet conditioning) or a combination thereof.
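The capacity-factor arithmetic in the daily-cycling example works out as follows:

```python
def capacity_factor(weeks, days_per_week, hours_per_day, load_factor):
    """Capacity factor = energy produced / energy at continuous full load."""
    run_hours = weeks * days_per_week * hours_per_day
    return load_factor * run_hours / 8760  # 8,760 hours in a year

# Daily-cycled plant: 50 weeks (2 weeks of maintenance), weekends off,
# 16 hours per operating day, average load factor of 0.75
cf = capacity_factor(50, 5, 16, load_factor=0.75)
print(f"{50*5*16} run hours/yr -> CF = {cf:.0%}")
```

The same function also runs the comparison in reverse: a reported CF well above this value implies some combination of a higher load factor, more run hours, or output augmentation.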

Figure 4: Comparison of Gaseous Fuel Prices in U.S., Europe and Japan

Figure 5: GTCC Efficiency Evolution 1985-2015

Long term natural gas (NG) price forecasts are difficult to make as illustrated by the chart in Figure 3, which superimposes outlooks by the U.S. DOE (consistently predicting increasing scarcity and rising prices) and The National Petroleum Council (NPC), with the latter comprising pessimistic (reactive path) and optimistic (balanced future) scenarios.

Except for the 2003-2008 period, when prices spiked above historical levels due to a tight market caused by several factors, i.e., weak supply and growth in demand for peaking power in particular, long term expectation of annual ~2 percent growth in NG prices pretty much held (going back to the Carter administration era and the Alaska Natural Gas Transportation System (ANGTS) project). Right after that peak price period, development of new sources of shale gas, driven by hydraulic fracturing and horizontal drilling technologies, has more than compensated for the decline in conventional supply, and has led to major increases in reserves of US natural gas. The so-called shale gas boom, although not a guarantee by any means, is expected to prevent non-seasonal, years-long price spikes and exorbitant long-term growth rates in the USA. The situation is somewhat different in Europe and Japan (see Figure 4).

What about the thermal efficiency? Historical GTCC efficiencies (rating, i.e., “advertising”, numbers as well as selected “field-clocked” values) are depicted in Figure 5. Also included in the same graph are the average efficiency of top twenty (in terms of heat rate) gas fired GTCC plants in the U.S. in 2004-2015 – including duct-fired units – which squeaked past 55 percent (LHV basis) only in the last couple years. (This was probably driven by the commissioning of the more advanced FA/H class units and increasing load factor – it is difficult to glean from the data, which includes only generation and fuel consumption numbers.) As illustrated by the min-max range, a select few registered as high as 57 percent whereas most plants (remember: these are among the twenty best in terms of performance – imagine the rest!) were clocked at only about 53 percent!

Coming back to the question posed at the beginning (i.e., additional 5 MW bottoming cycle output at $7.5 million extra cost – a good trade-off or not?), using the LCOE formula and assumptions, the answer is “it depends”. For a GTCC plant with cyclic duty, the value of the proposed improvement of extra 5 MW output is about $6 million for a fuel price of $4/MMBtu (HHV) or about $50,000 per each one Btu/kWh reduction in net heat rate. The fuel price to make it worthwhile at $7.5 million cost adder is about $6.50/MMBtu (HHV). Alternatively, at $4 fuel, the plant should run around 5,800 hours per year at base load duty (load factor of 0.9) to justify the $7.5 million cost adder.
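The order of magnitude of this "it depends" answer can be cross-checked with a back-of-the-envelope fuel-saving calculation. This is a sketch: the duty assumptions (4,000 hours at a 0.75 load factor) are borrowed from the cycling example above, and the article's ~$6 million value comes from a full LCOE model, not from this simple annual figure.

```python
# Annual fuel value of a heat-rate improvement (illustrative assumptions).
def annual_fuel_saving(delta_hr_btu_kwh, mw, hours, load_factor, fuel_price):
    """Avoided fuel cost per year, in dollars.

    delta_hr_btu_kwh : heat-rate improvement, Btu/kWh
    mw               : plant net output, MW
    hours, load_factor : annual duty assumptions
    fuel_price       : $/MMBtu
    """
    mwh = mw * hours * load_factor
    # Btu saved = delta (Btu/kWh) * kWh generated; /1e6 converts Btu to MMBtu
    return delta_hr_btu_kwh * mwh * 1000 * fuel_price / 1e6

saving = annual_fuel_saving(57, 505, 4000, 0.75, 4.0)  # cyclic duty, $4 gas
print(f"~${saving/1e6:.2f}M per year in avoided fuel")
```

At roughly $0.35 million per year, a levelization factor in the mid-to-high teens puts the lifetime value in the neighborhood of the ~$6 million cited in the text, so the annual number is consistent with the article's conclusion.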

THERMOECONOMICS

Today’s state-of-the-art in the bottoming cycle technology makes incremental improvements very costly to the point that, at least in the USA, at prevailing natural gas prices, even a third pressure level in the HRSG becomes a “luxury”. (This is explained in more detail in the PGI 2016 conference paper.) In order to take a closer look at this premise, a detailed performance-cost trade-off analysis is undertaken.

Three HRSG OEMs were provided with heat and mass balance data roughly corresponding to three variants with (i) the same natural gas-fired GT exhaust gas conditions (J class, ~1,500 lb/sec and 1,175°F) at ISO base load and (ii) the same HP throttle conditions (nominal 1,800 psig and 1,050°F):

  1. Base (Conventional) Case: 3P-RHT with normal HRSG pinch deltas
  2. “Cheap” Case A: 3P-RHT with large HP evaporator pinch delta
  3. “Cheap” Case B: 2P-RHT with normal HRSG pinch deltas

HRSG equipment price differentials of the "cheap" designs from the OEMs are summarized in Table 2. In addition, one of the three OEMs provided the quantity and man-hour savings resulting from the elimination of the IP section of a similar HRSG unit (in terms of size, configuration and steam production) erected at a recent U.S. combined cycle project. Deleted commodities and associated labor included large-bore pipe, valves, supports and welds; small-bore piping; the IP steam drum; flanged pressure relief valves; IP relief valve silencers and their support steel and platforms; instruments; HRSG hydro testing, chemical cleaning and pipe installation; and the deletion of the two-row-wide IP box. The resulting saving was equivalent to slightly more than 10,000 man-hours.

It is quite clear that a saving of nearly $1 million in equipment price is achievable via a cheaper HRSG. This could be obtained via either the removal of the IP section or a cheaper HP section (i.e., less HP steam production). Including the savings in erection materials and labor, the former is the preferable route, with around $1.7 million in total savings per HRSG (i.e., the averages for Case B: ~$1 million in price plus ~$750K for construction in Table 2).

Per OEM feedback, HRSG price delta between 2,400 and 1,800 psig cycle is around $500K based on tube, drum and pipe thicknesses with consideration for valve classification.

Two cases are evaluated in order to estimate capital investment savings in a 1x1x1 single-shaft GTCC similar to that proposed for an actual CC project. The base case is set as follows:

  • J Class gas turbine (natural gas-fired, with inlet evaporative cooler)
  • Design ambient conditions 90°F, 40 percent relative humidity
  • 3P-RHT unfired steam cycle: 2,415 psia HP throttle with 1,050°F for HP and hot reheat steam admission
  • Air-cooled condenser at 3.5 inches of mercury
  • HRSG evaporator pinch deltas 15 degrees F

The second “cheap” case is based on a 2P-RHT steam cycle with 1,815 psia cycle and 25 degrees F HP evaporator pinch delta. LP admission pressure is the same as in the base 3P-RHT case. In this case, GT fuel gas performance heating (to 410°F, same as in the base case) utilizes hot feed water from a dedicated economizer section. The GT is fired 8 degrees F higher to maintain the same GTCC net output as the base case. (The implicit assumption here is that the GT in question is one of the latest H/J class machines with the most advanced technology – superalloys, coatings, cooling schemes, etc. – allowing the OEM limited “wiggle room” about the nominal TIT of 1,600°C. For a GT quoted by the OEM at its extreme capability, of course, this is not a feasible option.) Cycle performances are calculated using Thermoflow’s GT PRO software. Total overnight cost is calculated using the PEACE add-in with calibration per above. The results are summarized in the Tables 3-5 below.

Gas turbine and generator price is from PEACE with some adjustment per GTW 2014-15 Handbook budgetary price data. Steam turbine price is also from PEACE with calibration based on in-house data. HRSG equipment price difference in Table 3 can be broken down as follows:

  • $500K for 2,400 to 1,800 psig steam cycle
  • $525K for HP evaporator pinch increase by 10 degrees
  • $1 million for IP section elimination

The “Mechanical” cost bucket in Table 4 includes on-site transportation, rigging, equipment erection assembly plus piping (materials plus labor). The difference of about $2.4 million between the “Base” and “Cheap” versions can be broken down as follows:

  • $750K for IP section elimination (see Table 2)
  • $1 million for 2,400 to 1,800 psig steam cycle
  • $600K for HP evaporator pinch increase by 10 degrees

The latter two are estimated by the PEACE program and are mainly driven by the smaller and lighter HRSG; they have not been verified by detailed construction material take-off and labor estimates.

The “Civil” cost bucket in Table 4 includes site work, excavation and backfill, and concrete foundations (including rebar). The difference of about $2.4 million between the “Base” and “Cheap” versions can be broken down as follows:

  • $900K for 2,400 to 1,800 psig steam cycle
  • $400K for HP evaporator pinch increase by 10 degrees
  • $900K for IP section elimination

All three are estimated by the PEACE program and are mainly driven by the reinforced concrete foundation material and labor for the smaller and lighter HRSG; they have not been verified by detailed construction material take-off and labor estimates.

Clearly, even at $5 natural gas, which is on the expensive side for the U.S. market in the foreseeable future, investing in the bottoming cycle for a few Btus of heat rate does not pay off. At $5 fuel, for LCOE parity between the base and "cheap" cycles:

  • TOC saving of ~$2.6 million is sufficient for cyclic operation whereas
  • TOC saving of ~$4.1 million is required for baseload operation

At a ~$11 million TOC saving, for LCOE parity between the base and "cheap" cycles:

  • Fuel price must exceed $30/MMBtu for cyclic operation, whereas
  • Fuel price must reach nearly $16/MMBtu for baseload operation

It is amply clear that, unless natural gas prices are exorbitantly high and/or the power plant in question is intended to assume a truly baseload duty, there is no case to be made for an expensive bottoming cycle. (Note that even if PEACE “Mechanical” and “Civil” estimates are off by 50 percent, the TOC saving is $8.7 million and contains enough margin to support this conclusion.)

One may justifiably object to the comparison in Table 5 by pointing out the 8 degrees F higher firing temperature for the “cheap” case.

Here’s the rationale: The reason for the higher firing temperature is to equalize the net output of the two cases.

Otherwise, the cheaper 2P-RHT design would have 3.7 MWe lower output (with about the same cost delta and slightly higher LCOE, i.e., $96.92 and $67.87 per MWh for cyclic and baseload duties, respectively, but still lower than those for the more expensive 3P-RHT variant).

The chain of thought goes as follows:

  • By going from the cheaper bottoming cycle to the expensive one, extra 3.7 MWe output is “bought” by paying $11 million.
  • This is equivalent to a 45 Btu/kWh better heat rate – at the exact same fuel consumption!
  • The question to ask is this: Which one is cheaper?

– Buying 3.7 MWe output for extra $11 million, or

– Buying 3.7 MWe output by extra fuel consumption

The answer, via LCOE analysis, turns out to be the latter. (Note that heat rate improvement more than compensates for marginally higher fuel burning and the heat rate delta improves to 37 Btu/kWh.)
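The capital-versus-fuel choice above can be illustrated with rough annualized numbers. This is a sketch: the 10 percent fixed charge rate, the assumed marginal heat rate and the cyclic duty cycle below are illustrative assumptions, not the authors' figures.

```python
# "Buying" 3.7 MWe two ways: carrying extra capital vs. burning extra fuel.
extra_mw = 3.7
capital_option = 11e6 * 0.10   # $/yr to carry $11M at an assumed 10% FCR

hours, load = 4000, 0.75       # cyclic duty, as in the article's example
heat_rate = 6200               # Btu/kWh, assumed marginal heat rate of the extra output
fuel_price = 5.0               # $/MMBtu
extra_mwh = extra_mw * hours * load
fuel_option = extra_mwh * 1000 * heat_rate * fuel_price / 1e6  # $/yr

print(f"capital: ${capital_option/1e6:.2f}M/yr vs fuel: ${fuel_option/1e6:.2f}M/yr")
```

Even with these rough inputs, burning the extra fuel costs roughly a third of carrying the extra capital each year, which is the direction of the article's LCOE-based answer.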

Another reasonable objection would be: "What about the expensive bottoming cycle and the 8 degrees F higher firing temperature?" To answer that, consider the performance and LCOE comparison of four possible design permutations in Table 6.

On a truly “apples-to-apples” basis, obviously, the performance delta between the “expensive” and “cheap” bottoming cycles is about 3.7 MWe and 45 Btu/kWh of heat rate. At $5 fuel, the LCOE comparison favors the latter.

On a “gross margin” basis (the difference between the market price of energy and the variable generation costs), it is true that the “Base” variant has a slight advantage.

However, it is not significant enough to severely impact the eventual place of the particular GTCC configuration selection (from owner/developer perspective) in the “merit/economic dispatch order” in a large ISO. Thus, the difference between the “levelized revenue requirement” quantified by the LCOE and the forecasted gross margin is the determinant in the selection of one variant over the other. In this case, the “Cheap” variant with the smaller difference should be the preferred configuration.

In general, as long as the marginal improvements lie within a small band, i.e., ±1 percent or less on net output and heat rate, erring on the side of less capex is probably a good bet.

Beyond that, however, commercial considerations, which cannot be encapsulated in a simple metric like LCOE, can take precedence.

In addition, the uncertainty aspect may become more critical to the extent that erring on the side of better performance (a more robust number than fuel prices over the next twenty years) may be the more prudent course of action.

CONCLUSION

Using fundamental thermodynamic and economic arguments, it is shown that performance improvement via larger HRSG is an uneconomic choice – unless justified by high fuel price and/or plant capacity factor (i.e., a base-loaded unit).

Conceptual analysis predictions are verified by OEM-supplied prices and detailed construction estimates.

The output delta is marginal enough that it can be achieved via a small increase in gas turbine firing temperature.

The capital cost saving is sufficiently large to more than compensate for the increase in heat rate so that, under most operating scenarios, the life-cycle LCOE favors the “cheaper” bottoming cycle – albeit with caveats enumerated in the preceding paragraphs.

Using Advanced Analytics and Controls
https://www.power-eng.com/om/using-advanced-analytics-and-controls/
Fri, 10 Mar 2017

Driving Economic Value in a Complex Operating Environment

By William J. Howard, Whitney Satin, Rachel Farr and John Plenge, GE Power Digital Solutions & Gas Power Systems USA

A number of industries have been transformed by the wave of digital innovations, big data, analytics and computing; more recently, the power industry has begun to selectively apply these digital technologies to drive better economic outcomes for existing and greenfield power plants. These disruptive technologies are arriving at a time when the power industry is encountering dramatic market dynamics resulting from changes in fuel prices, increases in renewables coming on line, and changes in the regulatory environment.

The challenges of managing power generation plants have become more complex: complicated system interactions, more co-optimization demands, and more operating profile flexibility. This convergence offers power producers an opportunity to embrace technology to reposition the competitiveness of their plant operations.

This article explains how today's power operators can use a modern ecosystem to drive ongoing operational productivity – protecting against downside risk while constantly pursuing upside opportunities, increasing economic value and reducing total cost. It covers a brief history of big data and applied analytics usage in power plants today, as well as the industry dynamics that have created new operational complexities. We present a maturity model that provides a roadmap for end users to advance through different stages of applied analytics to drive incremental and sustained productivity. We then discuss the obstacles to implementing this maturity model while specifically recommending how to progress to prescriptive analytics within the Industrial Internet architecture, moving from business applications in the cloud down to the controls layer.

Industry dynamics

In recent years, a variety of external forces have significantly impacted the way power generation companies run their assets and meet business objectives. An increase in the supply of renewables, combined with varying gas prices and a constantly changing regulatory environment, has increased the need for reliable, flexible power across generation assets. Notably, as the appetite for renewable energies increases, traditional baseload units must shift to a more cyclic operating profile. This, in turn, introduces new patterns of wear and tear that ultimately drives new maintenance needs and behaviors and impacts reliability. The complexity of this dynamic is compounded by the fact that operators must balance pressures to reduce maintenance budgets, as shorter run times have resulted in substantially lower profits overall.

Additionally, the workforce managing today's power plants is changing: nearly 30 percent of today's utility workforce is expected to retire in the next five years. The Bureau of Labor Statistics projects that employment of power plant operators in nonnuclear power plants will decline by 11 percent from 2012 to 2022, a dynamic mirrored globally, which creates an environment where a smaller number of less experienced operators are tasked with operating power plants reliably. In general, the workforce is being replaced by younger, more technologically savvy employees who expect cutting-edge technologies similar to those found in the consumer space, which can make it hard for power companies with traditional legacy systems to attract good talent.

Industry Response

Power producers have turned to data and analytics as a way to manage these dynamics. The use of data is not new to the industry; operators have collected and stored machine sensor data in historians for years. Many countries have regulations requiring customers to keep a minimum amount of data for a set period of time. In fact, across the power generation space, data storage is expected to grow at a 30 percent compound annual rate from 2014 to 2020. Typically this data serves as the basis for post-issue resolution: when a machine breaks, a technician leverages the data to determine a root cause and corrective action. A subset of customers have gone a step further, moving beyond basic data collection to simple dashboards that highlight trends and deviations from the norm. In addition, some operators use condition-based monitoring (CBM) systems, such as vibration detection systems, to detect equipment issues before catastrophic failures occur.
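The core of such a deviation-from-norm check is simple. Below is a minimal sketch, with illustrative sensor values and thresholds rather than anything drawn from a real CBM product:

```python
from statistics import mean, stdev

def deviation_alerts(readings, window=10, n_sigma=3.0):
    """Flag samples that stray from a rolling baseline by more than
    n_sigma standard deviations -- the basic check behind a trend alarm."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > n_sigma * sigma:
            alerts.append(i)
    return alerts

# A steady bearing-vibration signal (mm/s) followed by a step change:
signal = [2.0, 2.1, 1.9, 2.0, 2.05, 1.95, 2.0, 2.1, 1.9, 2.0, 2.0, 6.5]
alerts = deviation_alerts(signal)  # flags the final sample
```

Production CBM systems layer much more on top (filtering, per-operating-mode baselines, persistence logic), but the descriptive stage of the maturity model is essentially this comparison repeated across thousands of tags.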

More and more operators are considering ways to leverage data and analytics at greater scale, whether by collecting a larger number of data points, centralizing the data repository, or purchasing analytic packages or additional CBM systems. More advanced operators are going a step further, looking to transform their organizations by building out centralized monitoring and diagnostic centers and the associated engineering teams. Ultimately, use cases around data and analytics can be described by a simple maturity model (Figure 1):

Maturity Model

As the application of analytics moves from descriptive to prescriptive, the focus changes from merely collecting information and doing basic trending to leveraging data and analytics for true optimization. We see a shift from using data to protect plants from financial downside (equipment failure leading to unavailability and expensive repairs) to enabling upside (purposefully timed maintenance that balances operational risk and reward). As the analytics provide more valuable insights on operational risks and opportunities, they need to be connected both to the people who make operational decisions and to the advanced controls that can adapt and maneuver the machines toward the desired outcomes.

Limitations Across Early Stages of the Maturity Model

Though early-stage data collection methodologies offer some benefits to plant operators, they still carry significant process inefficiencies with little impact on key performance indicators (KPIs). For example, using trend charts for root cause analysis after the fact may reduce resolution time, but it does not improve the reliability of the plant, a metric valued nearly universally. It also results in a reactive approach to maintenance, meaning operators engage in significant "firefighting" as issues emerge, which often incurs high maintenance costs and reduces availability. Periodic review of dashboards can help identify issues early but introduces its own complications: interpretation is often subjective and time-consuming, particularly when coupled with a field workforce that is increasingly inexperienced. Potentially severe issues may go undetected, or operators may waste time chasing noncritical issues.

Over time operators have increasingly relied on CBM systems, and while these are effective at detecting failures, they often do so within equipment silos and lack the capabilities to enable cross-business collaboration for speedier and more accurate issue resolution. This approach also tends to reinforce a need for system specialization and extensive training at a time when workforces are getting leaner.

Challenges Moving Up the Maturity Model

More advanced operators engage their IT departments to help synthesize data across different point solutions. This enables them to start moving away from merely protecting the downside toward optimizing the upside. By engaging with IT, these operators hope to create an ecosystem: a connection of data, networks, computers, CBM systems, and the people who use them.

Despite the best of intentions, most of these efforts fall short. Usually the connections built are simple in nature: data flows in one direction and is aggregated in a way that makes it difficult to view the underlying parameters. For example, most data integration software can highlight the total number of events, but an operator cannot drill into the details that triggered an alarm in the first place, requiring a manual piecing together of data from disparate systems to see the full picture. This adds significant time to the resolution process, complicated by the fact that most people in the organization do not have access to all the systems, which makes it difficult or impossible to connect the dots. In addition, the effort to connect systems introduces significant cost and complexity related to both systems integration and data storage.

Where analytical insights point to an opportunity to adapt a machine’s operating profile, significant delays to implement these improvements can occur unless the systems are connected through compatible software models. This inability to take analytical insights and make control changes that take full advantage of the machine’s operating envelope presents yet another obstacle to achieving operational excellence.

A New Paradigm: The Industrial Ecosystem

To continue to create new levels of business value in today's environment, operators and critical power equipment suppliers must together think and perform in a new way, incorporating people and technology seamlessly into their business processes. Doing this requires a new infrastructure, one purposefully designed to enable continuous improvement while leveraging data from multiple sources and enabling more effective decision-making throughout the organization. The new digital ecosystem is designed to enable faster iterative learning and optimization actions, an enhanced user experience across the operational team, and a new class of applications focused on targeted outcomes that create new business value.

At GE, we’ve built Predix, a new industrial-grade ecosystem to accomplish the above directives, leveraging the best-in-breed of cloud technology from the consumer space and contextualizing it for the realities of an industrial setting. This helps equip plants with the digital infrastructure needed to more fully realize the benefits of advanced software models and analytics.

Cloud Technology. Industrial systems are typically built with a rigid structure, becoming obsolete as technologies advance and needs change. Building a modern ecosystem should take this into account, allowing the system to adapt and evolve over time. We see this in the consumer space, where updates are constantly pushed to different platforms and devices and there is a general race to adopt the newest technologies quickly. This is possible because most consumer applications leverage cloud technology. From an infrastructure standpoint, cloud provides a lower total cost of ownership, elastic storage, and greater compute power. It also enables continuous, seamless updates and the ability to access applications from multiple devices regardless of location. "Platform-as-a-Service" (PaaS), a platform built on the cloud that leverages a common services architecture, is an increasingly popular concept. It provides the ability to leverage common services (e.g., visualization and authentication) to create new applications and allows for faster development, enabling developers to focus on unique value-added capabilities rather than reinventing basic functionality.

Industrial-Specific Components. While several cloud platforms exist today, none has successfully combined the key elements developers need to build applications geared toward the industrial space. The industrial environment introduces different-in-kind infrastructure needs: data services, asset models, and analytic services.

In most industrial settings, the thousands of sensors on hundreds of components and pieces of equipment generate data at sub-second rates, meaning that any infrastructure must have the ability to handle massive amounts of time series data. Beyond time series data, the infrastructure should enable the collection of and quick access to a variety of data sources and data types, including work order history, weather data, drawings, etc.
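To make the scale requirement concrete, here is a minimal sketch (illustrative timestamps and values, not a Predix API) of one common ingestion-layer tactic: averaging sub-second samples into fixed one-second buckets before storage:

```python
from collections import defaultdict

def downsample(samples, bucket_s=1.0):
    """Average (timestamp_seconds, value) samples into fixed time buckets,
    keyed by bucket index -- a simple way to tame high-rate sensor data."""
    buckets = defaultdict(list)
    for t, v in samples:
        buckets[int(t // bucket_s)].append(v)
    return {b: sum(vs) / len(vs) for b, vs in sorted(buckets.items())}

# Sub-second exhaust-temperature samples collapse to one value per second:
raw = [(0.0, 10.0), (0.25, 12.0), (0.5, 14.0), (1.1, 20.0), (1.9, 22.0)]
per_second = downsample(raw)  # {0: 12.0, 1: 21.0}
```

Real time-series stores keep the raw stream as well, but rollups like this are what make years of sub-second data from thousands of tags queryable at interactive speed.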

Second, an industrial setting needs an asset model that can integrate this data. A typical relational database model cannot efficiently handle the complexity of relationships between the different data sources, requiring a graph database model instead. A graph database allows for greater flexibility as far as analyzing the interconnections between various data points, a typical feature of today’s complex business models.
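A minimal sketch of the graph idea, with hypothetical asset names: storing relationships as explicit triples lets a query walk plant-to-unit-to-sensor connections directly, where a relational schema would need a chain of joins:

```python
# Relationships stored as (source, relation, target) triples; all of the
# node identifiers here are hypothetical examples, not a real schema.
edges = [
    ("plant:alpha", "has_unit", "gt:1"),
    ("gt:1", "has_sensor", "sensor:exhaust_temp"),
    ("gt:1", "has_sensor", "sensor:vibration"),
    ("gt:1", "has_workorder", "wo:2041"),
]

def related(node, relation):
    """Follow one relation type out of a node."""
    return [t for s, r, t in edges if s == node and r == relation]

def sensors_under(plant):
    """Traverse plant -> units -> sensors in two hops."""
    return [s for unit in related(plant, "has_unit")
            for s in related(unit, "has_sensor")]
```

A production graph database adds indexing, typed properties, and a query language, but the payoff is the same: new relation types (work orders, weather feeds, drawings) attach as new edges without reshaping existing tables.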

Finally, the industrial setting requires a system that can take advantage of the new asset models and data services, allowing for a new set of advanced analytics. Analytic services provide operators with the ability to test and deploy analytics rapidly across a fleet of assets, and then monitor, improve and update the analytics as needed. This iterative dynamic is a key component that enables the development and implementation of prescriptive analytics.

Plant-Level Digital Infrastructure. At an individual plant level, the local controls architecture can dramatically increase the value of advanced software models and analytics. By connecting the insights of advanced analytics to the ways in which a machine can be optimally adapted to meet a new mission objective, the controls architecture creates flexibility in the operating envelope of power plants while configuring the system throughout the lifecycle so that plants remain relevant given changing industry dynamics. This relies on both physics-based domain knowledge as well as terabytes of operational and test data, enabling plants to migrate from traditional schedule-based control schemes to system integrated model-based controls.

This advanced software modeling gives assets the flexibility to operate in a broader space, bounded by critical KPIs (e.g., output, emissions, life, ramp rate) as defined by the specific power producer at a specific point in time. In the case of complex co-optimization problems, dynamics such as trading life, heat rate and emissions can be studied through predictive simulations and then be directly implemented in the adaptive control software structure. Bridging cloud analytics, decision support, and adaptive controls gives operators the ability to consume big data from the plant and fleet to drive iterative improvements that can be quickly applied to provide better outcomes for both plant systems and individual machines. Ultimately, the greatest benefit stems from controls and data analytics maturity increasing in tandem-specifically, to the point at which prescriptive analytics enable an asset to respond dynamically to allow power producers to reach critical KPIs on a given day, week, month or year.

Putting the Ecosystem into Action

With the industrial infrastructure in place, operators now have the capabilities to move up the data analytic and control maturity curve to execute on the above outlined benefits of the new ecosystem: 1) iterate quickly on advanced analytics; 2) utilize an expanded operational envelope; 3) develop and make use of intuitive applications; and 4) incorporate (1), (2) and (3) into business processes and operational decisions to create additive sources of value.

Advanced Analytics. Analytics have a natural lifecycle and require multiple iterations to perfect. For example, the quality of an analytic improves by increasing the probability of detection while decreasing the number of false positives. This is made exponentially easier when analytics are written by engineers with deep domain knowledge. Analytic maturity is also a factor. As analytics move up the maturity curve (Figure 1), they transition from descriptive to prescriptive: no longer just detecting issues but predicting them and automatically recommending how to proceed. All told, the faster operators can iterate, the faster they can create a high-quality, prescriptive analytic, which ultimately delivers greater business value. This also highlights the importance of an infrastructure that supports analytic services, giving operators functionality like a sandbox to create and test analytics, as well as a mechanism to rapidly deploy them and monitor their quality metrics.
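Each iteration of an analytic can be scored on exactly the two axes named above. A toy sketch, with made-up event timestamps:

```python
def analytic_quality(predicted, actual):
    """Score one analytic iteration: probability of detection
    (caught events / actual events) and the count of false alarms.
    Events are represented as sets of timestamps."""
    caught = len(predicted & actual)
    pod = caught / len(actual) if actual else 1.0
    false_alarms = len(predicted - actual)
    return pod, false_alarms

# Version 1 misses an event and raises two false alarms; version 2,
# refined with domain input, catches everything with no false alarms.
v1 = analytic_quality(predicted={3, 7, 9, 12}, actual={3, 9, 15})
v2 = analytic_quality(predicted={3, 9, 15}, actual={3, 9, 15})
```

Tracking these two numbers per deployment is what turns analytic development into the fast, measurable iteration loop the maturity model calls for.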

Even the more advanced analytics require that operators further investigate an alert after the initial trigger. An application that captures that initial diagnosis, collaboration, and issue resolution, and feeds these inputs back into the analytic development process, is an important means of moving analytics to the prescriptive phase. When coupled with an on-premise digital infrastructure that runs advanced software modeling, high-quality prescriptive analytics will eventually lead to machine learning and full process automation as exposure to data increases; this ultimately creates the fastest time to economic value.

Advanced Software Modeling. Incorporating high-fidelity physics-based models, or "digital twins," of plant components into controls and connecting them at a system level opens up an operational envelope not otherwise available. These models are the backbone of adaptive control strategies that protect assets and enhance operation. For example, performance, combustion and lifing models can be used to operate gas turbines closer to design boundaries to meet a specific desired outcome in terms of efficiency or output, with greater reliability, in real time. These advanced software models can be applied at different levels of the architecture, depending on the functional objective: fleet analytics, process optimization or real-time machine control. When models are linked across the assets at the plant level, they can be used to reach an optimal integrated performance, such as the fastest or most efficient start for a combined cycle plant.
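The co-optimization described here can be caricatured as constrained selection: maximize output subject to emissions and life-consumption bounds. The models below are deliberately crude placeholders, not turbine physics:

```python
def best_setpoint(candidates, output, nox, life_rate, nox_limit, life_limit):
    """Choose the load setpoint that maximizes output while staying inside
    the emissions and component-life bounds -- a toy stand-in for the
    trade-offs a model-based controller co-optimizes in real time."""
    feasible = [s for s in candidates
                if nox(s) <= nox_limit and life_rate(s) <= life_limit]
    return max(feasible, key=output) if feasible else None

# Illustrative monotone models: pushing load raises output, NOx and life use.
setpoint = best_setpoint(
    candidates=[0.80, 0.90, 0.95, 1.00],   # fraction of rated load
    output=lambda s: 400 * s,              # MW
    nox=lambda s: 10 + 40 * s,             # ppm
    life_rate=lambda s: s ** 3,            # relative life consumption
    nox_limit=50.0,
    life_limit=0.9,
)
```

Here full load satisfies the emissions limit but burns component life too fast, so the controller settles at 95 percent; change the KPI bounds and the answer moves, which is the point of keeping them operator-defined.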

Intuitive Applications. Applications are a way to connect people and technology to achieve business outcomes. As discussed, they provide a way to take the domain knowledge from operators and feed it into the analytic development process. Additionally, applications enable the visualization of a wide variety and quantity of data across multiple assets, enabling operators to work with the data through an intuitive interface and potentially uncover new insights. Because these applications leverage cloud technology, they enable operators to seamlessly collaborate from anywhere. Finally, when advanced software models and controls are coupled with prescriptive analytics and an intuitive application interface, operators are armed to drive better outcomes than ever before. These factors ultimately enable more effective decision making across a wide variety of daily business processes.

Business Value. Equipped with analytics and applications, power generators now have opportunity to derive additional business value across their operations. Several GE customers have helped pioneer a variety of both cloud-based and on-premise applications, improving the reliability and operability of assets in today’s complex operating environments.

Bord Gáis. The 445-megawatt Whitegate gas combined-cycle power plant, owned by Bord Gáis Energy, is located 25 miles east of the city of Cork and provides power to 10 percent of Ireland. With European government regulations demanding more renewable energy production, in turn creating a greater need for reliable, on-demand generation capacity, Bord Gáis Energy understood it needed to prepare the Whitegate station for future grid challenges.

  • Bord Gáis Energy required a condition-based monitoring solution at the Whitegate plant to ensure continuous operation with no unplanned downtime. The company chose GE's Asset Performance Management (APM) solution, which provides a single, consolidated view of plant performance and is powered by GE's enterprise platform, Predix. The Whitegate implementation of APM reduced plant downtime and plant operating costs. With APM, early warnings of failure mechanisms from 300 algorithms detect when plant components are about to fail, allowing for more efficient outage management. The integrated solution has created a €2.28M positive financial impact from cost savings and cost avoidance, with no plant unavailability due to covered equipment and 21 additional "catches" by the system.
  • RasGas. RasGas is one of the world's premier integrated liquefied natural gas (LNG) enterprises, having transformed a regional resource into a key component of the global energy mix. RasGas is a Qatari joint stock company with more than 3,000 employees, owned by Qatar Petroleum (70 percent) and ExxonMobil (30 percent). Qatar remains the largest LNG exporter, providing 77 MTA to the market, roughly one-third of global supply. LNG production at RasGas in Ras Laffan, Qatar, consists of seven production trains with an approximate capacity of 37 million tons per year. RasGas is focused on cost and value optimization to reduce overall expenditures and enhance efficiency by improving plant reliability and availability without compromising safety, health and the environment. The initiative at RasGas began in late 2014 with a pilot for early detection of equipment or system failures and production optimization for selected units of three LNG trains. The pilot covered GE and non-GE equipment with GE's Asset Performance Management (APM) solution, built on the Predix cloud platform, using machine sensor data, predictive analytics and process optimization. GE's APM solution empowers RasGas with asset anomaly detection through a unified user experience covering both GE and non-GE assets, providing alerts, alarms and historical analysis with visibility into asset performance and health. The intent of RasGas's APM analytic solution was to reduce unplanned downtime, improve productivity and reliability, and move from reactive to predictive maintenance for rapid recovery. The GE team worked closely with the RasGas team to identify opportunities for a pilot project to detect early failures and pursue production optimization initiatives by leveraging a consolidated store of collected machine sensor data, analytic software and a platform that provides a plant-wide view. 
By mid-2015, the pilot project had demonstrated that early failures and process optimization opportunities can be detected with analytics, with the ability to identify areas to optimize and reduce waste.
  • Électricité de France (EDF). In the summer of 2016, GE inaugurated the world's most efficient gas turbine combined-cycle power plant (62.2 percent) in Bouchain, France, in association with Électricité de France (EDF): a 609-MW 1×1 9HA.01 plant. This milestone demonstrates a new era of power generation technology and the digital integration possible for power plant projects. The project began by harnessing the more than five terabytes of data collected from the HA test facility in Greenville, South Carolina, where the 9HA was tested in a full-scale, full-load test cell. Learnings from the GE fleet were then used to develop extreme test scenarios to exercise the machine's design boundaries. The findings resulted in refinements to the advanced software models used for controlling the 9HA gas turbine, effectively expanding the operating envelope and enabling the industry-leading performance demonstrated. GE pioneered the use of model-based control software that precisely models the machine's physics, a technique instrumental in the performance and operability growth of the industry-leading F-class gas turbine and one implemented at the onset with the HA platform. The Bouchain plant specifically demonstrates the benefits a digitally integrated plant can provide. For example, to deliver a combined cycle plant capable of reaching full power in less than 30 minutes, this advanced control technique must manage, in a coordinated fashion, the turbine thermal stresses, steam process stability, emissions and output during the start sequence. The full automation of the power island not only enables fast plant start maneuvers but also helps reduce process variances and improves start reliability. To achieve these outcomes, Bouchain utilizes a state-of-the-art user interface that brings into focus the critical components of an operator's task while filtering out distracting information. 
For example, the HMI organizes and presents the supporting information for the disposition workflow as important alerts emerge; this newly designed user interface can reduce nuisance alarms by as much as 80 percent. Furthermore, this automation framework helps operators establish a stable process, one that is predictable so that improvement opportunities can be quickly identified and implemented.

Summary

Today's environment requires that operators rethink the basic definition of how technology adds value to the business. Equipped with an increasingly dynamic set of prescriptive analytics and a larger operating envelope, accessed through the right application and supported by an industrial-grade platform, businesses can now move from using data for information to using it to drive toward optimization. This is a continuously evolving process, not only because it takes time to infuse the analytics and applications with domain knowledge and expertise, but also because business objectives and operating environments are rarely stagnant. An ecosystem that allows for rapid iteration of analytics provides the flexibility and resources organizations need to continuously adapt to all these complexities, both today and in the future.

]]>
Fast Start Combined Cycles: How Fast is Fast? https://www.power-eng.com/emissions/fast-start-combined-cycles-how-fast-is-fast/ Fri, 10 Mar 2017 00:41:00 +0000 /content/pe/en/articles/print/volume-121/issue-3/features/fast-start-combined-cycles-how-fast-is-fast By Mike Eddington, Mark Osmundsen, Indrajit Jaswal, Jason Rowell, and Brian Reinhart

Cane Run, a 640-MW natural gas combined cycle unit owned and operated by Louisville Gas & Electric, was completed in June 2015 and features two Siemens SGT6-5000F(ee) gas turbines, a Siemens SST6-5000 steam turbine and a heat recovery steam generator from Vogt Power International. Photo courtesy: Black & Veatch

In recent years, the term “fast start” has become commonplace in the power generation industry. Specifically, many new combined cycle units being conceptualized or designed incorporate elements of fast start. However, this term can take on many meanings.

Various solutions exist to address specific requirements, depending on what is driving the need for faster startup times. Is the driver reduced emissions, improving plant dispatch, meeting ancillary services or a combination of factors? This article explores these drivers for fast-start plants and outlines the differences in plant design requirements, initial capital and operations and maintenance (O&M) costs for the various levels of fast-start capability.

Conventional Versus Fast Start

Generally, conventional start combined cycle units are restricted by heat recovery steam generator (HRSG), steam turbine generator (STG) and interconnecting balance-of-plant (BOP) equipment design. The combustion turbine generator (CTG) is required to follow a restricted load profile so that excessive exhaust energy is not provided to the HRSG and, subsequently, the STG. Fast-start combined cycle plants aim to disconnect (to various extents) the CTG loading from the STG.

In addition, HRSGs must adhere to manufacturer and design-specific temperature gradient limits (how fast the temperature can rise in the evaporator drums). Because of this, the amount of CTG exhaust energy (temperature and mass flow) needs to be controlled to prevent exceedance of this temperature gradient restriction.
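The arithmetic behind such a gradient limit is straightforward; the figures below are illustrative, not any manufacturer's limits:

```python
def min_warmup_minutes(t_start_c, t_target_c, max_gradient_c_per_min):
    """Minimum time to raise a thick-walled drum's saturation temperature
    to target without exceeding the allowed temperature gradient."""
    return (t_target_c - t_start_c) / max_gradient_c_per_min

# A cold HP drum at 40 C warming to ~310 C saturation, held to 5 C/min,
# needs at least this long regardless of available CTG exhaust energy:
minutes = min_warmup_minutes(40.0, 310.0, 5.0)  # 54.0 minutes
```

This is why conventional starts hold the CTG at low load: exhaust energy beyond what the gradient limit can absorb simply cannot be used. Keeping the drum warm between starts shrinks the temperature span and, with it, the minimum warm-up time.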

In conventional start combined cycle facilities, the interconnecting BOP equipment, including major steam piping and steam turbine bypass systems, may restrict the rate at which steam can be introduced to the steam turbine. During cold start conditions (typically greater than 72 hours following plant shutdown), the major steam lines may reach a cold state. To prevent condensation carry-over to the steam turbine, the major steam lines need to be warmed. This can add significant time to the plant startup sequence.

The steam turbine is typically the most restrictive element of combined cycle startup because of the large thermal inertia. During startup, the steam turbine casing thermally expands at a slower rate than the rotor and blades. To prevent blade rubbing and turbine degradation, the rate at which steam energy is admitted is restricted. In conventional combined cycle plants, this restriction in steam temperature and mass flow is controlled by slowing down the load ramp rate of the combustion turbine.

The fast-start combined cycle unit is designed to remove these bottlenecks and allow the plant to load faster. The figure on this page provides a high-level schematic comparing a CTG load path for a 1×1 conventional combined cycle to that of a fast-start combined cycle.

Fast-Start Drivers

To determine which elements of fast-start combined cycles should be applied to a facility, it is critical to understand what is driving the need. How fast does the facility need to be?

Typical drivers for fast-start combined cycle plants are as follows:

  • Air permit constraints on emissions (nitrogen oxides [NOx], carbon monoxide [CO], particulate matter [PM], etc.) per start event.
  • Carbon dioxide [CO2] emissions limitations (in pounds per megawatt-hour [lb/MWh]).
  • Reduction in time to achieve stack emissions compliance (minimum emissions-compliant load).
  • Reduction in the time to reach dispatched load.
  • Startup fuel consumption reduction.
  • Ancillary services for non-spinning reserve to increase revenue.

Conventional Versus Fast-Start CTG Load

Air Permit Compliance

Depending on the air permitting requirements of the facility, startup emissions (typically NOx, CO and PM) may be restricted on a pounds per event or annual basis. To reduce the amount of emissions generated during startup, it is beneficial to achieve stack emissions compliance as soon as possible following combustion turbine ignition.
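Per-event startup emissions are simply the integral of the mass rate over the start. The sketch below, with invented NOx rate profiles (not permit data), shows why reaching SCR-compliant load sooner cuts the per-start total:

```python
def startup_emissions_lb(rates_lb_per_hr, step_minutes):
    """Integrate a piecewise-constant emission-rate profile over a start.
    Each entry is the average rate during one `step_minutes` interval."""
    return sum(r * step_minutes / 60.0 for r in rates_lb_per_hr)

# Hypothetical NOx profiles in 10-minute steps: the fast start spends
# fewer steps at high pre-compliance rates before the SCR takes over.
conventional = startup_emissions_lb([120, 120, 80, 40, 10, 10], 10)
fast_start = startup_emissions_lb([120, 60, 10, 10], 10)
```

Against a pounds-per-event permit limit, shaving high-rate steps off the front of the profile is worth far more than trimming the compliant tail.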

Cane Run Unit 7 in western Louisville can achieve a quick startup and shutdown to meet emission limits. Photo courtesy: Black & Veatch

Improved Dispatch

Fast-start combined cycle plants can improve dispatch characteristics, depending on the market. Many energy markets are now providing non-spinning reserve incentives. Dispatch calls are made every five minutes and require the unit to be on line within 10 minutes. The amount of time required for notification to serve load for a fast-start combined cycle is significantly less than the time required for a conventional start combined cycle or thermal plant. Because of this, if the plant operates in a market with significant load generation or demand volatility (e.g., significant renewable assets), fast-start attributes enable the facility to be called upon more frequently and at higher load, thereby increasing revenue.

Design Features of Fast-Start Combined Cycle Plants

Compared to conventional start plants, fast-start plant designs include specific measures to address equipment life expenditure resulting from the cycling operation and to improve reliability. In addition, there are also differences between fast-start plant designs depending on whether the intent is to minimize startup emissions or to generate load as quickly as possible. The following paragraphs address the differences in equipment caused by the additional design considerations for fast start.

Combustion Turbine

To keep capital costs low, conventional start plants typically employ a single CTG starting system shared among multiple turbines. Fast-start plants designed to minimize startup emissions may use a similar approach. Plants designed for fast starts to rated CTG load, on the other hand, require individual start systems to support simultaneous starts (i.e., all CTG/HRSG trains starting together).

HRSG

HRSGs designed for conventional start plants typically require holds at low combustion turbine loads to gradually warm thick-walled components (such as the high-pressure [HP] evaporator drum) prior to ramping. The unrestricted CTG startup in fast-start plants, on the other hand, can subject cold HRSG components such as superheaters and reheaters to rapid heating. Large thermal stresses can be produced by the differential expansion of the tubes within the HRSG. HRSG designs in such plants must be capable of accommodating the rapid change in temperature and flow of flue gas generated by the startup and load ramping of advanced class combustion turbines.

Stack Damper and Insulation

HRSGs can lose heat during shutdowns because of airflow from the natural draft created by the stack. Blocking the flow through the HRSG stack using a stack damper and insulating the stack up to the stack damper will minimize heat loss when the unit is offline. Maintaining the HRSG in a hot condition is critical for reducing startup time.

Natural Gas Purge Credits

Purging of the CTG and HRSG is required according to National Fire Protection Association (NFPA) 85 to ensure a safe light-off during startup. Conventional units typically include a CTG/HRSG purge as part of the startup sequence, which leads to a longer startup period. Fast-start plants avoid the startup purge through a purge credit. This method performs the purge during the CTG shutdown process and requires additional provisions to ensure isolation of fuel from the gas turbine and HRSG duct burners and ammonia from the selective catalytic reduction (SCR) system.

Steam Turbine

Multi-casing steam turbines with separate HP, intermediate-pressure (IP) and low-pressure (LP) sections can improve startup but increase cost. STG designs for fast-start plants feature optimized casings that reduce thermal stress during startup and rapid load changes. Higher-grade materials may be employed in the HP and IP casings and valves to reduce component thickness. Other features include a fully automated turbine startup and shutdown control system and an integral rotor stress monitor. The rotor stress monitor is typically capable of limiting or reducing the steam turbine load or speed increase and is designed to trip the turbine when calculated rotor stresses exceed allowable limits.
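The supervisory logic of such a monitor reduces to a few comparisons. A simplified sketch with illustrative thresholds (real monitors calculate stress from thermal models rather than taking it as an input):

```python
def ramp_command(stress_mpa, hold_limit_mpa, trip_limit_mpa, requested_rate):
    """Simplified rotor-stress supervisor: permit the requested ramp below
    the hold limit, hold load/speed between hold and trip limits, and
    trip the turbine above the trip limit."""
    if stress_mpa >= trip_limit_mpa:
        return "trip"
    if stress_mpa >= hold_limit_mpa:
        return 0.0  # hold until rotor and casing temperatures equalize
    return requested_rate

normal = ramp_command(200.0, 300.0, 400.0, requested_rate=5.0)  # full ramp
hold = ramp_command(350.0, 300.0, 400.0, requested_rate=5.0)    # hold
```

In a conventional plant this hold is enforced indirectly by slowing the CTG; a fast-start design lets the stress monitor govern the STG alone while the gas turbine loads freely.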

Bypass System

Steam turbine bypass systems in conventional start units typically see little use during startup once the STG is on load: CTG load ramping is slow enough that the STG can swallow all the additional steam generated in the HRSGs. In fast-start plants, the thermal energy released by the combustion turbine during startup is significantly higher, and excess steam must be managed. At a minimum, HRSGs with cascaded steam turbine bypass systems and condensers designed for 100 percent steam dump capability are required. Fast-start plants designed for fast CTG loading may require an additional bypass system to dump HP steam directly into the condenser.

Auxiliary Boiler

Conventional start plants which do not include an auxiliary boiler require additional time during the startup process to establish condenser vacuum. Fast-start plants may need an auxiliary boiler, but this need must be evaluated on a case-by-case basis. An auxiliary boiler provides sparging steam for the HRSG (to maintain warm drums) and condenser and seal steam for the steam turbine to maintain condenser vacuum while the unit is offline.

Terminal Steam Attemperators

Conventional start plants hold the CTG load during startup as needed to meet the STG startup steam temperature requirements. Fast-start plants decouple the CTG/HRSG startup from the STG startup by using terminal attemperators at the HRSG outlet for meeting STG startup steam temperature requirements, irrespective of CTG/HRSG load. This allows the STG to come on line independently from the CTG and HRSG. As a result, the plant can increase load more quickly.
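At its core, terminal attemperation is a spray-water energy balance: enough feedwater is injected to bring the HRSG outlet steam down to the STG startup temperature requirement. The sketch below is a minimal illustration of that balance; the enthalpy values and flow rates in the example are assumed round numbers for illustration, not design data from any plant described here.

```python
# Illustrative spray-water mass balance for a terminal attemperator.
# Steady-state energy balance:
#   m_steam * h_in + m_spray * h_spray = (m_steam + m_spray) * h_out
# Solving for the spray flow gives the function below.

def spray_flow(m_steam, h_in, h_out, h_spray):
    """Return the spray-water flow (same units as m_steam) needed to cool
    steam from enthalpy h_in to h_out using water at enthalpy h_spray."""
    if not h_spray < h_out < h_in:
        raise ValueError("need h_spray < h_out < h_in")
    return m_steam * (h_in - h_out) / (h_out - h_spray)

# Example with hypothetical values: 400 klb/hr of HP steam at 1,480 Btu/lb
# attemperated to 1,410 Btu/lb using 300 Btu/lb feedwater.
m = spray_flow(400.0, 1480.0, 1410.0, 300.0)
print(round(m, 1))  # about 25.2 klb/hr of spray water
```

The same balance explains why attemperation capacity must be sized for the worst-case mismatch between CTG/HRSG load and STG steam temperature requirements.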

Automated Startup Sequence

Compared to a manual or semi-automated startup control system used in a conventional start plant, fast-start plants typically utilize a fully automated control system. As a result, more plant instrumentation is required in automated plants to allow the plant control system to monitor system status, minimize times between sequential steps and provide consistent startups.
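A fully automated startup sequence of the kind described above can be pictured as a chain of steps, each gated by an instrumented permissive that the control system checks before advancing. The Python sketch below is purely illustrative; the step names and permissives are assumptions for this example, not any vendor's actual DCS logic.

```python
# Minimal sketch of a permissive-gated startup sequencer. Each step waits on
# a status check before the sequence advances, which is why automated plants
# need more instrumentation than manually sequenced ones.
# Step names and permissives are illustrative only.

STEPS = [
    ("verify_purge_credit", lambda s: s["purge_credit_valid"]),
    ("start_ctg",           lambda s: s["ctg_at_speed"]),
    ("admit_steam_to_stg",  lambda s: s["steam_temp_ok"]),
    ("load_ramp",           lambda s: s["load_target_met"]),
]

def run_startup(status):
    """Advance through the sequence, returning the list of completed steps.
    Holds at the first step whose permissive is not yet satisfied."""
    done = []
    for name, permissive in STEPS:
        if not permissive(status):
            break  # hold here until instrumentation clears the permissive
        done.append(name)
    return done

status = {"purge_credit_valid": True, "ctg_at_speed": True,
          "steam_temp_ok": False, "load_target_met": False}
print(run_startup(status))  # ['verify_purge_credit', 'start_ctg']
```

In a real plant the sequencer would also time out, alarm, and log each hold; the point of the sketch is only that every automated advance requires a measured signal to clear.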

CAPITAL AND O&M COSTS

Capital costs are higher for fast-start plants than for plants designed for conventional starts. Additional costs to consider include a more flexible HRSG (e.g., header returns, tube-to-header connections, limits on harps per header); terminal attemperators and associated systems; more flexible steam piping; improved steam piping drain systems; an improved bypass system with controls integration; and provisions for auxiliary steam.

Various starting regimes require different levels of additional features. For example, a fast-start plant designed to minimize emissions will require fewer flexible design features than a fast-start plant designed for rated load. Because of the large variability in fast-start requirements and potential features, capital cost requirements should be evaluated on a case-by-case basis.

O&M costs are also higher for fast-start plants. However, other than the combustion turbine maintenance factors, the BOP maintenance cost increases are expected to be low relative to the capital cost increases, as long as the plant’s start regime is commensurate with its fast start design features.

Conclusion

While the definition of a fast-start combined cycle unit can be vague, it is important to understand the key drivers that may influence plant design. After the key drivers are known, a host of cycle design options that impact operations, reliability and cost should be considered. Since there are no one-size-fits-all solutions, it is recommended that fast-start capabilities be determined early in the conceptualization and design phases of a project.

Authors

Mike Eddington is a senior consultant at Black & Veatch. Mark Osmundsen is manager of Thermal Performance & Technologies, Americas, at Black & Veatch. Indrajit Jaswal is a thermal performance engineer at Black & Veatch. Jason Rowell is turbine technologies manager at Black & Veatch. Brian Reinhart is manager of Technology Assessments & Technical Due Diligence at Black & Veatch.

SCR Performance https://www.power-eng.com/coal/boilers/scr-performance/ Fri, 10 Mar 2017 00:40:00 +0000 /content/pe/en/articles/print/volume-121/issue-3/features/scr-performance At Duke Energy’s Gibson Generating Station in Owensville, Indiana, SBS Injection® technology has been used for SO3 control on all five units since 2005. Photo courtesy: AECOM

SO3 Mitigation to Reduce Emissions & Operating Costs

By Sterling Gray, Jim Jarvis, Chad Donner and Josh Estep

As a result of the Cross-State Air Pollution Rule (CSAPR) and the Ozone National Ambient Air Quality Standard (NAAQS), utilities such as Duke Energy are faced with the need to achieve further reductions in annual and ozone-season NOx emissions.

Consequently, utilities must develop ways to maximize the value and performance of their SCR system equipment. Two strategies that can cut NOx emissions include boosting the percentage NOx reduction efficiency during periods of higher-load operation and developing the ability to keep their SCR systems in service during reduced-load operation.

Implementation of these strategies is hampered by the presence of SO3 in the flue gas. During higher-load operation, SO3 produced in the boiler and by the SCR catalyst itself forces operation with low ammonia slip levels (typically 2 ppm or less) to avoid air heater fouling from ammonium bisulfate (ABS) deposition. Although many variables are involved, the ammonia slip constraint effectively caps the NOx reduction efficiency for a given SCR system configuration. Similarly, ABS deposition within the catalyst due to “capillary condensation” dictates the minimum operating temperature (MOT) for the SCR system and, therefore, the minimum reduced-load operating condition where ammonia can be injected for NOx reduction. The ability to operate a unit at the lowest possible load – with the SCR in service – helps minimize losses during periods of negative power pricing. This allows the unit to be dispatched and avoids operating costs associated with unit start-ups and shutdowns.

SO3 removal upstream of the air heater, and ideally upstream of the SCR reactor itself, is becoming an increasingly important part of both the higher- and reduced-load strategies for reducing NOx emissions. For higher-load operation, the concept is to reduce SO3 to very low levels at the air heater inlet. This relieves the constraint on ammonia slip because there is not enough SO3 available to form appreciable amounts of ABS in the air heater. With the ammonia slip constraint relaxed, modest increases in ammonia slip are possible, which allows the NOx reduction efficiency to be increased.

If a reduction in NOx emissions is not necessary for environmental compliance, then an alternative approach for implementing an elevated ammonia slip strategy is to instead operate the SCR reactor at a lower average reactor potential. This might be accomplished, for example, by operating with two catalyst layers instead of three, or alternatively, through less frequent catalyst replacement. Over time, these approaches result in a lower catalyst consumption rate and an appreciable reduction in life-cycle catalyst costs.

The ability to operate with elevated ammonia slip and without air heater fouling is also helpful when there are local variations in the ammonia-to-NOx ratio at the SCR inlet. These variations might occur due to variations in the local NOx concentration, an inability to fully “tune” ammonia injection, variations in local gas flow, etc. When these conditions exist, it may be more cost-effective for the utility to control air heater fouling by reducing SO3 rather than ammonia slip. The lack of air heater fouling has been demonstrated at plants using the SBS Injection process for SO3 mitigation during both short-term testing and longer-term operation at slip levels well above 2 ppm.

Figure 1: Minimum SCR Operating Temperature versus Flue Gas SO3 Concentration

Figure 2: Pre-SCR SBS Injection as Implemented on Unit 5 at Gibson Generating Station

Indeed, the ability to tolerate elevated ammonia slip or to adopt an elevated ammonia slip operating strategy may more often be constrained by the presence of higher ammonia levels in the water and scrubber solids streams than by air heater fouling.

Reducing SO3 to very low levels also helps minimize ABS condensation within the SCR catalyst when operating at reduced-load (low temperature) conditions.

With less SO3 present, the MOT decreases, which allows the SCR system to stay in operation at lower-load conditions where ammonia injection would not otherwise be possible.

Duke Energy employs both dry and wet sorbent injection technologies at their plants for SO3 control.

At Duke Energy’s Gibson Station in Owensville, Indiana, the SBS Injection® technology has been used for SO3 control on all five units since 2005.

From 2009 through 2014, the plant relocated the sorbent injection equipment from downstream of the air heater to upstream of the SCR reactors. As the equipment on each unit was relocated, this "pre-SCR" SO3 mitigation capability was used to expand the operating range of the SCR reactors, keeping ammonia injection in service at lower loads.

Duke Energy recently performed testing to further leverage this capability to allow even lower-load operation of the SCR reactors while maintaining high NOx reduction efficiencies.

This testing included a bench-scale evaluation, conducted by the SCR catalyst supplier, and on-site testing incorporating the operation of both the SCR and SO3 mitigation systems.

Although Duke Energy has an interest in operating at higher percentage NOx reduction efficiencies (via elevated ammonia slip operation), the primary focus of the testing was directed towards enhanced operation of their SCR systems at very low load conditions.

The results of the testing were favorable, and Gibson Station has again revised their SCR system operating guidelines. The new guidelines allow full ammonia injection (85 percent NOx reduction efficiency) at lower loads than ever before. Duke Energy’s goal is to maximize the value of their emission control system investments to meet ever-changing emission control and economic challenges.

The Minimum Operating Temperature Issue

The presence of SO3 in the flue gas often dictates the minimum operating temperature (MOT) of the SCR system during reduced-load operation. When SO3 is present, the reactor temperature is typically maintained above the minimum operating temperature when ammonia is being injected to avoid ABS condensation within the catalyst pores. In practice, the reactor can be operated below the MOT for short periods of time if these periods are followed by operation at higher reactor temperatures. Nonetheless, operation below the MOT has the potential for both short-term and long-term impacts to catalyst performance.

The consequences of an MOT limitation can be significant. If the minimum load with the SCR system in service is higher than the minimum load for the boiler, then power producers may be forced to operate at higher than desired loads during periods of low or negative power pricing, just to keep the SCR systems in service. In some cases, operating costs may increase due to unit shut down and startup costs, or the unit may even be idled. With the trend towards reduced capacity factors for many coal-fired boilers, the ability to keep the SCR system in service at the lowest-possible load conditions is a significant economic benefit.

Figure 3: Evolution of SCR Operation and NOx Reduction Goals for Gibson Unit 1

Figure 4: Full-Scale Test Results from Unit 1 at Gibson

The minimum operating temperature for SCR catalyst is a function of the concentrations of both ammonia and SO3, and the tendency for ABS formation within the catalyst is the greatest near the inlet of the SCR where the ammonia concentration is the highest. Figure 1 depicts the relationship between the minimum operating temperature and the concentration of SO3 in the flue gas. The relationship is a function of many variables, including the SCR inlet NOx concentration, the desired percentage NOx removal, the type of catalyst, and other variables; thus, the minimum operating temperature is shown as a range in Figure 1. However, the figure illustrates a key point – the minimum operating temperature can be significantly reduced if the SO3 can be reduced to very low levels. Thus, SO3 mitigation upstream of the SCR reactor allows full or at least partial NOx reduction at significantly lower boiler loads relative to what would be possible without SO3 mitigation.
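The trend in Figure 1 can be illustrated numerically. The sketch below interpolates MOT between two anchor points loosely drawn from values reported in this article (a design MOT of 622°F at an assumed nominal 5 ppm SO3, and roughly 500°F shown feasible near 0.5 ppm); the log-linear shape between the anchors is an assumption for illustration, not a published correlation.

```python
import math

# Illustrative interpolation of minimum operating temperature (MOT) vs.
# SO3 concentration. Anchor points are loosely drawn from this article;
# the log-linear interpolation between them is an assumption, not data.

POINTS = [(0.5, 500.0), (5.0, 622.0)]  # (SO3 ppm, MOT in deg F)

def mot_estimate(so3_ppm):
    """Rough MOT estimate, clamped to the anchored SO3 range."""
    (x0, y0), (x1, y1) = POINTS
    so3 = min(max(so3_ppm, x0), x1)  # clamp to [0.5, 5.0] ppm
    frac = (math.log(so3) - math.log(x0)) / (math.log(x1) - math.log(x0))
    return y0 + frac * (y1 - y0)

for ppm in (0.5, 1.0, 2.0, 5.0):
    print(f"{ppm:>4} ppm SO3 -> MOT ~{mot_estimate(ppm):.0f} F")
```

Whatever the exact curve shape, the qualitative point matches the figure: driving inlet SO3 down by an order of magnitude buys roughly 100°F of additional low-temperature operating margin.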

In theory, SCR performance enhancement should be possible through SO3 reduction via either wet or dry sorbent injection. Duke Energy has both types of systems and is exploring the benefits available through SO3 reduction at the air heater inlet and/or SCR inlet. In the case of the SBS Injection technology, the process can be installed at locations along the flue gas path from the economizer outlet to scrubber inlet. However, most of the recent installations have been installed upstream of the SCR, and at the present time, the process has been applied at the pre-SCR location on 14 units. In many of those applications, minimum operating temperature was a factor in selecting the injection location.

In the pre-SCR configuration, the reagent injected upstream of the SCR is intended to control the boiler SO3, as well as the SO3 produced by the SCR catalyst. At the inlet to the SCR, however, only the boiler SO3 is present. Consequently, the concentration of the reagent is very high relative to the concentration of the SO3, and the SO3 concentration at this critical location can be reduced to very low levels. As shown in Figure 1, this is exactly what is needed to achieve significant reductions in MOT.

Reduced-Load SCR Performance Enhancement

Duke Energy’s Gibson Station consists of five 675-MW units firing 4 to 6 lb/MMBtu coal. Each unit is equipped with a high-dust SCR system (three catalyst layers), horizontal-shaft air heaters and cold-side ESPs. In 2005, the plant installed the SBS Injection SO3 mitigation technology downstream of the air heaters on all five units. From 2006 to 2008, work sponsored by a consortium of utilities demonstrated the feasibility of injecting sodium-based reagents upstream of the SCR. Based on the favorable results from this testing, Gibson elected to move the reagent injection location upstream of the SCR reactors on Unit 5. Figure 2 shows a diagram of the SCR system on Unit 5. On this unit, the Par Mixers that were originally part of the SCR system design were removed to make room for the SBS system injection lances. Similar conversions were implemented on the remaining units, and the final conversion was completed in 2014. All five units are now operated in the pre-SCR configuration with soda ash reagent injection upstream of the SCR reactors.

Once the conversions were completed, Gibson used the pre-SCR SO3 mitigation capability to operate the SCRs with ammonia in service at lower loads and temperatures than were permitted prior to the conversions. For example, prior to the relocation on Unit 1, the SCR system's design minimum operating temperature was 622°F. After relocating the SO3 mitigation system to the pre-SCR location, a phased injection approach was implemented:

  • 85% NOx reduction at temperatures down to 580°F;
  • 50% NOx reduction at temperatures down to 570°F; and
  • 25% NOx reduction at temperatures down to 550°F (about 250 MW).

This strategy was based on the premise that the SO3 concentration at the SCR inlet was nominally 5 ppm (even though test data suggested the actual concentration might be much lower). Over time, this operating strategy has resulted in significantly lower NOx emissions for this unit than would have been possible before the pre-SCR conversion.
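The phased guidelines above reduce to a simple temperature-to-reduction lookup. In the sketch below, the thresholds come straight from the list; the function form itself is only an illustration, not the plant's actual control logic.

```python
# The phased ammonia-injection guidelines, expressed as a lookup: given the
# SCR reactor temperature, return the maximum permitted NOx reduction.
# Thresholds are from the article; the implementation is illustrative.

PHASES = [          # (minimum temperature in deg F, allowed NOx reduction %)
    (580.0, 85),
    (570.0, 50),
    (550.0, 25),
]

def allowed_nox_reduction(temp_f):
    """Return the permitted NOx reduction (%) for a given SCR temperature,
    or 0 if the reactor is below the lowest ammonia-injection threshold."""
    for min_temp, pct in PHASES:
        if temp_f >= min_temp:
            return pct
    return 0

print(allowed_nox_reduction(600.0))  # 85
print(allowed_nox_reduction(575.0))  # 50
print(allowed_nox_reduction(540.0))  # 0 -- no ammonia injection
```

The 2016 testing described below effectively pushed the bottom threshold lower still, toward full reduction near 500°F.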

During the summer of 2016, Duke Energy conducted testing at several stations to demonstrate the capability to keep the SCRs in service at even lower load conditions. At Gibson Station, one objective of the testing was to demonstrate full NOx reduction at a minimum boiler load of 200 MW, where the minimum flue gas temperature entering the SCR reactor approaches 500°F. Figure 3 illustrates the goal of that testing for Unit 1 relative to the operation prior to, and after, the pre-SCR conversion.

The program conducted by Duke Energy included SCR pilot testing, which was conducted by Cormetech, along with full-scale testing at several plants. The Cormetech testing confirmed that it is possible to operate an SCR reactor at temperatures as low as 500°F if the SO3 concentration at the SCR inlet can be reduced to very low levels.

Data from testing on Unit 1 at Gibson Station is shown in Figure 4. For enhanced operation at full load, an SO3 concentration of no more than a few ppm at the air heater inlet would be necessary to permit operation with elevated ammonia slip levels. The results show that the SO3 concentration was reduced from about 47 ppm (without SO3 mitigation) to an average of 2.4 ppm (with SO3 mitigation in service). For low-load operation, the SCR inlet SO3 concentration is critical for the purpose of reducing the MOT. On Unit 1, the SO3 concentration at the economizer outlet is higher during low-load operation than at full load, probably as a result of higher excess oxygen concentrations in the flue gas. Nonetheless, the average SO3 concentration at the SCR inlet was reduced to about 0.5 ppm with SO3 mitigation in service. This is an SO3 concentration that is even lower than what was determined to be sufficient during the Cormetech bench-scale testing.

Based on the test results, the plant modified the SCR operating guidelines to be consistent with the goal depicted in Figure 3. This change was implemented near the end of 2016; thus, there is limited operating experience at the present time. However, operating experience on Unit 3 included considerable low-load operation at loads as low as 236 MW. The NOx removal efficiency was maintained at 85 percent with no indication of problems associated with the new SCR operating guidelines.

Summary

Utilities are looking for new strategies to improve the performance of their SCR systems. SO3 mitigation, implemented upstream of the air heater or upstream of the SCR system, offers the opportunity for increased SCR operating flexibility and reduced operating costs. Consequently, utilities are evaluating and implementing alternative operating strategies that maximize the value of their existing emission control systems.

Authors:

Sterling Gray is a Business Development Manager for AECOM's Process Technologies Group in Austin, Texas. Jim Jarvis is a project manager for AECOM. Chad Donner is the Sorbent Injection Subject Matter Expert for Duke Energy's Fleet Consulting Services Organization. Josh Estep is an engineer at Duke Energy's Gibson Station.

Cybersecurity: Step One is Collaboration https://www.power-eng.com/emissions/cybersecurity-step-one-is-collaboration/ Fri, 10 Mar 2017 00:39:00 +0000 /content/pe/en/articles/print/volume-121/issue-3/departments/energy-matters/cybersecurity-step-one-is-collaboration by Robynn Andracsek, P.E., Burns & McDonnell and contributing editor

Cybersecurity affects many aspects of our society, but perhaps none as significantly as that of power generation. The daily life of most Americans depends on access to stable and reliable electricity, to the point where uninterruptible power backs up critical infrastructure, such as hospitals and data centers, in order to avoid the slightest electricity interruption. Historically, weather-related events and equipment failures were the most common causes of power interruption; however, times are changing.

When viewed from a national perspective, cybersecurity presents the greatest risk today to power producers. The first step towards mitigating this threat is collaborating with experts such as those at the National Cyber-Forensics & Training Alliance (NCFTA) in Pittsburgh, Pennsylvania. NCFTA is unique in that it is an independent, nonprofit organization, part of neither the Federal Bureau of Investigation (FBI) nor the private sector; it works with both to share intelligence about cybersecurity.

Cybersecurity represents a threat to more than one sector of our society. As companies and systems switch to online access and cloud storage of information, the exposure to cyber threats increases. Who hasn’t had a bank card or website account compromised by information theft? However, the stakes are higher when it comes to specific threats targeting energy production.

Cybercrime cannot be solved by a single organization. The Cyber Initiative and Resource Fusion Unit (CIRFU) of the FBI works with NCFTA. Cybercrime is different from other types of crime such as bank robbery because it takes place in virtual spaces. It is with the help of the private sector that the FBI can find cyber criminals and help keep individual networks safe. The FBI seeks the people behind the attacks in order to prosecute them. CIRFU serves as a bridge between the targets of cybercrime (such as utilities) and the government.

Supervisory Special Agent Thomas Grasso of CIRFU recommends that utilities proactively take the following steps:

  • Establish contacts with the local FBI Cyber Action Team and maintain up-to-date phone and email information for them. In the middle of an incident it is imperative to be able to quickly reach skilled assistance.
  • Keep the lines of communication open by sharing information between yourself, the FBI, and other private companies. Stay informed on the latest threats since an attack that occurs at one utility might quickly be tried at another.
  • Establish an incident response plan, then proactively and regularly test this plan. An out-of-date plan provides no protection and can hinder reactions during an event.
  • Join your local InfraGard chapter.

InfraGard is another partnership between the FBI and the private sector, composed of vetted representatives from businesses representing our nation's critical infrastructure. According to Special Agent Ronda Schell, Kansas City Division InfraGard Coordinator, “The partnership is a mechanism for law enforcement and the private sector to share information and intelligence in a secure manner. Meetings and briefings are held periodically throughout the year providing an opportunity to discuss threats and matters which could affect their specific companies.” InfraGard comprises 84 chapters and more than 54,000 members nationwide representing critical infrastructure sectors such as utilities, banking, healthcare, railroads, and chemical manufacturing. For more information regarding InfraGard and membership, visit www.infragard.org.

Cybersecurity threats are evolving. Ten to 15 years ago, hackers demonstrated proficiency in order to establish street cred. Now, hackers are more likely to engage in cybercrime for profit. Across the board, cyber criminals look to make money by stealing information that they can then sell. These attackers seek new information storage and access points to exploit. A low-priority system might provide access to a network which could be linked to sensitive controls. In fact, many systems might be online unbeknownst to the company's cybersecurity team. Management of change plans are needed to communicate when new systems are brought online so that internal security specialists can analyze any risk in advance.

Cyber criminals are experts at collaborating amongst themselves and training new members. The guardians of the electric grid need to be just as good at sharing information. The good news is that with the help of partnerships like InfraGard and NCFTA, the private sector is catching up.

Industry News https://www.power-eng.com/renewables/industry-news-24/ Fri, 10 Mar 2017 00:38:00 +0000 /content/pe/en/articles/print/volume-121/issue-3/departments/industry-news Georgia Power Plans 1,600 MW of Additional Solar Through 2021

Georgia Power announced plans to add up to 1,600 MW of additional solar power to the state through 2021.

The company has already installed 846 MW of solar capacity, including more than two million solar panels, through the end of last year.

The largest additions in 2016 included four solar developments totaling 120 MW at various military bases, with an additional 30 MW project under construction at Marine Corps Logistics Base Albany, and power purchase agreements with hundreds of other small, medium and large-scale installations across the state.

“We continue to focus on introducing new products, services and programs that bring renewable energy to our state without putting upward pressure on rates and ensuring 24/7 reliability for customers,” said Norrie McKenzie, vice president of renewable development for Georgia Power.

Tesla Completes Energy Storage Project for Southern California Edison

Tesla has completed a 20-MW energy storage system that will ensure reliability for Southern California Edison’s grid.

The battery storage system comprises two 10-MW systems, each containing 198 Tesla Powerpacks and 24 inverters. This arrangement allows the system to be connected to two separate circuits at the Mira Loma substation.

SCE selected Tesla for the project during a competitive bid in September.

The California Public Utilities Commission ordered SCE to ramp up its use of energy storage to help make up for the loss of natural gas storage at Southern California Gas Company’s Aliso Canyon storage facility due to a massive leak.

China to Overtake U.S. in Nuclear Production Within a Decade

Though the United States currently has the most nuclear power production, new research indicates China could become the top nuclear producer by 2026.

China is slated to nearly triple its nuclear capacity to just short of 100 GW by 2026, Bloomberg reported, citing a study by BMI Research.

China added 8 GW of nuclear power last year, bringing the country to a current total of 34 GW. The country now has 20 reactors under construction, with another 176 either planned or proposed, according to the International Atomic Energy Agency.

Georgina Hayden, head of energy and renewable research at BMI, said that China could also develop the ability to export nuclear capabilities and technology abroad.

Bloomberg noted China General Nuclear Power Corp. and China National Nuclear Corp., both of which are state-run, are seeking to sell and build nuclear power plants across the globe to help China with a slowing economy.

Wartsila to Supply Two 50-MW Power Plants for UK Projects

Wartsila announced the company will supply two 50-MW Smart Power Generation plants to Centrica for use in developments within the UK.

The plants, each based on five Wartsila 34SG engines running on natural gas, will be incorporated into facilities at Brigg in northeast Lincolnshire and Peterborough in Cambridgeshire. The plants are expected to enter operation in 2018.

“Centrica’s decision to go with our technology is a testament to the fact that our Smart Power Generation technology plays a key role in the UK power system,” said Bent Iversen, Business Development Manager at Wartsila Energy Solutions. “It shows that flexibility is needed and rewarded by the market.”

Including these plants, Wartsila now has over 250 MW of power generation in the UK.

Entergy Breaks Ground on St. Charles Power Plant

Entergy held an official groundbreaking for its 980-MW St. Charles Power Station in St. Charles Parish, Louisiana.

The $869 million facility is being built to replace other aging facilities, and should become operational in mid-2019, the New Orleans Times-Picayune reported.

Additionally, the St. Charles Power Station will help meet growing power needs due to various new chemical industry and manufacturing infrastructure developments.

Construction will employ 700 workers, while the finished power plant will employ 27 permanently. Entergy will pass construction costs on to its customers, who will see an average monthly bill increase of $1.92 starting in 2020.

Mitsubishi Hitachi Power Systems will supply two digitally-enhanced M501GAC gas turbines for the station.

AEP Completes Sale of Four Power Plants

American Electric Power announced it has completed the sale of four competitive power plants to Lightstone Generation LLC, a joint venture of Blackstone and an affiliate of ArcLight Capital Partners.

The sale, announced Sept. 14, includes 5,200 MW of generation capacity for $2.1 billion.

The plants include:

  • Lawrenceburg Generating Station, 1,186 MW natural gas, Lawrenceburg, Indiana
  • Waterford Energy Center, 840 MW natural gas, Waterford, Ohio
  • Darby Generating Station, 507 MW natural gas, Mount Sterling, Ohio
  • Gen. James M. Gavin Plant, 2,665 MW coal, Cheshire, Ohio

Ten Nuclear Plants Came Online in 2016, 55 Under Construction

Ten new nuclear reactors came online in 2016, half of which were in China, according to the World Nuclear Industry Status Report.

Of the remaining five, one each was in India, Pakistan, Russia, South Korea and the United States. The Tennessee Valley Authority's new Watts Bar 2 reactor was activated 43 years after the start of construction, a world record in project longevity.

Accounting for two reactor shutdowns, including the Fort Calhoun reactor, and the restart of two Japanese reactors, there are now 406 operating nuclear reactors in the world, up from 396 one year ago. The United States still has the largest operating fleet, at 99.

Only three new reactors broke ground in 2016, two in China and one in Pakistan. Eight began construction in 2015.

Currently 55 reactors are under construction in 13 countries, with 35 of them behind schedule. China alone has 21 reactors under development.

Siemens Installs 8-MW Wind Turbine Prototype

Siemens announced it has installed the latest version of its offshore direct-drive wind turbine at a test center in Østerild, Denmark.

The SWT-8.0-154 prototype can generate 8 MW with its 154-meter rotor, and was installed on a steel tower with a hub height of 120 meters.

After mechanical and electrical testing, final certification should come in 2018. The prototype already received safety certification from DNV GL earlier this month.

Now, Siemens enters the final development phase for the new turbine, which allows for 10 percent higher annual energy production than the 7 MW model. The higher rating will be achieved with only a few component upgrades, including a new cooling concept and new control systems.

Natural Gas Generation to Spike Through 2018

Utilities are planning to add 11.2 GW of natural gas capacity in 2017 and 25.4 GW in 2018, according to a report from the U.S. Energy Information Administration.

Should the plants come online as planned, these additions would be the highest since 2005, and represent a capacity increase of eight percent.

The additions could help natural gas retain its title as the primary energy source for power generation in the long term, even if natural gas prices rise moderately as expected.

The upcoming construction follows a five-year trend of net reductions in coal capacity. From 2011 to 2016, coal lost 47.2 GW of capacity, representing 15 percent of the total fleet. These retirements and conversions of coal to natural gas come as a result of environmental regulations and the sustained low cost of natural gas. Prices fell from an average of $5 per million BTU in 2014 to $2.78 per million BTU in October 2016.

New York’s First Offshore Wind Project Approved

The Long Island Power Authority has approved a plan to build New York state’s first offshore wind farm 30 miles east of Montauk.

LIPA signed a 20-year power purchase agreement with Deepwater Wind LLC, the developer of the 90-MW, 15-turbine wind project, the Wall Street Journal reported. Construction on the $740 million project will start in 2020 with operations expected to begin in 2022.

Thomas Falcone, CEO of the Long Island Power Authority, said the Deepwater development won’t be the last or largest offshore wind development built near New York.

The Deepwater development was announced during New York Governor Andrew M. Cuomo’s call for the addition of 2.4 GW of offshore wind development earlier this month.

That goal, to be met by 2030, would also include an 800-MW, 79,000-acre wind project 17 miles south of the Rockaway Peninsula. Statoil Wind US LLC won the rights from the U.S. government to lease the area for wind energy in December.

]]>
Scott Pruitt Seeking Certainty for Power Producers https://www.power-eng.com/emissions/scott-pruitt-seeking-certainty-for-power-producers/ Fri, 10 Mar 2017 00:37:00 +0000 /content/pe/en/articles/print/volume-121/issue-3/departments/opinion/scott-pruitt-seeking-certainty-for-power-producers By Russell Ray, Chief Editor

Oklahoma Attorney General Scott Pruitt was confirmed by U.S. lawmakers last month to lead the Environmental Protection Agency. Pruitt’s mission can be described in one word: Reform.

Pruitt, 48, is expected to restore balance among the economic, reliability and environmental concerns of power generation. As we see it, the scale has long been tilted toward environmental concerns. The imbalance is a byproduct of misguided policies, unreasonable mandates and hardline interest groups.

We think Pruitt will provide pragmatic leadership for an industry in desperate need of a balanced, common-sense approach that recognizes all forms of power, including coal. Under the previous administration, U.S. power producers were effectively barred from building new, highly efficient coal-fired generation in the U.S., a draconian measure that endangers the reliability and affordability of the nation’s power supplies.

Navigating the regulatory maze is a complicated undertaking for power producers nowadays. Developing a sound, cost-effective strategy for compliance has been complicated by layers of new environmental rules and delays in implementation. One misstep can set a project back by years, costing power producers and their customers millions.

In his first address to EPA staff, Pruitt said he wants to end the confusion created by this regulatory chaos.

“Regulations ought to make things regular,” he said. “Regulators exist to give certainty to those that they regulate. Those that we regulate ought to know what’s expected of them, so that they can plan and allocate resources to comply.”

Pruitt described the political rhetoric surrounding his confirmation as a “toxic environment” and urged his critics outside and inside the EPA to debate the issues we face as a nation with civility.

“We ought to be able to get together and wrestle through some very difficult issues and do so in a civil manner,” Pruitt told agency staff. “I seek to be a good listener. I look forward to spending time with you. You can’t lead unless you listen.”

EPA Administrator Scott Pruitt

Pruitt’s acumen for the issues facing power producers was demonstrated as he challenged many of the EPA’s rulemakings as Oklahoma’s attorney general. In July 2014, after the EPA unveiled the Clean Power Plan, Pruitt said the plan’s goals were arbitrary and failed to recognize the capabilities of power producers. He said then the EPA “should use an ‘inside the fence’ approach that allows each state to set emission standards for existing power plants by evaluating each unit’s ability to improve efficiency and reduce CO2 emissions in a cost-effective way.”

Although Pruitt’s nomination was contentious, it was never in jeopardy. The U.S. Senate voted 52-46 in support of Pruitt’s nomination.

As Oklahoma attorney general, Pruitt sued the agency he now leads more than a dozen times, claiming the agency was exceeding its authority. During his confirmation hearing, Democrats questioned his cooperation with Oklahoma’s energy industry in challenging EPA’s efforts to regulate pollutants under the Clean Air Act.

Pruitt told lawmakers the EPA is legally obligated to regulate pollutants under the CAA, but those measures, he said, represented an intrusion into state jurisdiction.

“We must reject as a nation the false paradigm that if you’re pro-energy you’re anti-environment, and if you’re pro-environment you’re anti-energy,” Pruitt told lawmakers.

Now that Pruitt has been confirmed, expect the new administration to enact more policy changes aimed at oil and natural gas producers and power generators. His first move will likely be rolling back the previous administration’s plan to curb greenhouse gas emissions, better known as the Clean Power Plan.

“This is a beginning. It’s a beginning for us to discuss certain principles by which I think this agency should conduct itself,” Pruitt said. “I look forward to leading this agency with those principles in mind.”

If you have a question or a comment, contact me at russellr@pennwell.com. Follow me on Twitter @RussellRay1.

]]>
PE Volume 121 Issue 3 https://www.power-eng.com/issues/pe-volume-121-issue-3/ Thu, 02 Mar 2017 04:30:00 +0000 http://magazine/pe/volume-121/issue-3