From 3f848aa900f0c29a93b874d62a1dab30f51c4d98 Mon Sep 17 00:00:00 2001
From: Ed
Date: Wed, 6 Mar 2024 19:45:47 -0800
Subject: [PATCH] better formulation of polyroot

---
 05-open_channel.Rmd             | 5 +++--
 docs/flow-in-open-channels.html | 7 ++++---
 docs/search_index.json          | 2 +-
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/05-open_channel.Rmd b/05-open_channel.Rmd
index 3e39e31..94db539 100644
--- a/05-open_channel.Rmd
+++ b/05-open_channel.Rmd
@@ -392,10 +392,11 @@ For a 0.25 m rise, and using $q = Q/b = 2.2/0.5 = 4.4$, combining Equation \@ref
 $$E_2 = E_1 - 0.25 = 2 + \frac{4.4^2}{2(9.81)(2^2)} - 0.25 = 2.247 - 0.25 = 1.997 ~m$$
 From the specific energy diagram, for $E_2=1.997 ~ m$ a depth of about $y_2 \approx 1.6 ~ m$ would be expected, and the flow would continue in subcritical conditions. The value of $y_2$ can be calculated using Equation \@ref(eq:open-rect1):
 $$1.997 = y_2 + \frac{4.4^2}{2(9.81)(y_2^2)}$$
 which can be rearranged to
 $$0.9867 - 1.997 y_2^2 + y_2^3 = 0$$
-Solving a polynomial in R is straightforward using the `polyroot` function and using `Re` to extract the real portion of the solution.
+Solving a polynomial in R is straightforward using the `polyroot` function, with `Re` extracting the real part of each root after any roots with non-negligible imaginary parts are filtered out.
 ```{r openrect-poly, message=FALSE, warning=FALSE}
-Re(polyroot(c(0.9667, 0, -1.997, 1)))
+all_roots <- polyroot(c(0.9867, 0, -1.997, 1))
+Re(all_roots)[abs(Im(all_roots)) < 1e-6]
 ```
 The negative root is meaningless, the lower positive root is the supercritical depth for $E_2 = 1.997 ~ m$, and the larger positive root is the subcritical depth. Thus the correct solution is $y_2 = 1.62 ~ m$ when the channel bottom rises by 0.25 m.
diff --git a/docs/flow-in-open-channels.html b/docs/flow-in-open-channels.html
index fb0ce01..b5de46b 100644
--- a/docs/flow-in-open-channels.html
+++ b/docs/flow-in-open-channels.html
@@ -908,9 +908,10 @@

5.6 Flow in Rectangular Channels \[E_2 = E_1 - 0.25 = 2 + \frac{4.4^2}{2(9.81)(2^2)} - 0.25 = 2.247 - 0.25 = 1.997 ~m\] From the specific energy diagram, for \(E_2=1.997 ~ m\) a depth of about \(y_2 \approx 1.6 ~ m\) would be expected, and the flow would continue in subcritical conditions. The value of \(y_2\) can be calculated using Equation (5.20): \[1.997 = y_2 + \frac{4.4^2}{2(9.81)(y_2^2)}\] which can be rearranged to \[0.9867 - 1.997 y_2^2 + y_2^3 = 0\]
-Solving a polynomial in R is straightforward using the polyroot function and using Re to extract the real portion of the solution.

-
Re(polyroot(c(0.9667, 0, -1.997, 1)))
-#> [1]  0.9703764 -0.6090519  1.6356755
+Solving a polynomial in R is straightforward using the polyroot function, with Re extracting the real part of each root after any roots with non-negligible imaginary parts are filtered out.

+
+all_roots <- polyroot(c(0.9867, 0, -1.997, 1))
+Re(all_roots)[abs(Im(all_roots)) < 1e-6]
+#> [1]  0.9897429 -0.6146591  1.6219162

The negative root is meaningless, the lower positive root is the supercritical depth for \(E_2 = 1.997 ~ m\), and the larger positive root is the subcritical depth. Thus the correct solution is \(y_2 = 1.62 ~ m\) when the channel bottom rises by 0.25 m.
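The subcritical depth can also be cross-checked without forming the cubic, by solving the specific energy equation directly with base R's uniroot — a minimal sketch, bracketing the search above the critical depth:

```r
# Cross-check (a sketch, not the book's code): solve the specific energy
# relation 1.997 = y + q^2/(2*g*y^2) directly for the subcritical depth.
q <- 4.4    # unit discharge, m^2/s
g <- 9.81   # m/s^2
E2 <- 1.997 # specific energy downstream of the rise, m
f <- function(y) y + q^2 / (2 * g * y^2) - E2
yc <- (q^2 / g)^(1/3)  # critical depth; the subcritical root lies above it
y2 <- uniroot(f, lower = yc, upper = 3)$root
y2
```

The supercritical root can be found the same way, bracketing between a depth just above zero and the critical depth.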

A vertical line or other annotation can be added to the specific energy diagram to indicate \(E_2\) using ggplot2 with a command like p1 + ggplot2::geom_vline(xintercept = 1.997, linetype=3). The hydraulics R package can also add lines to a specific energy diagram for up to two depths:
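A minimal self-contained sketch of that geom_vline annotation (the p1 object is built elsewhere in the book, so the specific energy curve is recreated here from q = 4.4 m²/s, assuming ggplot2 is installed):

```r
# Recreate a specific energy curve E(y) = y + q^2/(2*g*y^2) and annotate E2
library(ggplot2)

q <- 4.4
g <- 9.81
y <- seq(0.4, 3, length.out = 200)
df <- data.frame(y = y, E = y + q^2 / (2 * g * y^2))
p1 <- ggplot(df, aes(x = E, y = y)) +
  geom_line() +
  labs(x = "Specific energy, E (m)", y = "Depth, y (m)")
p1 + geom_vline(xintercept = 1.997, linetype = 3)
```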

diff --git a/docs/search_index.json b/docs/search_index.json index fd68da4..67602d7 100644 --- a/docs/search_index.json +++ b/docs/search_index.json @@ -1 +1 @@ -[["index.html", "Hydraulics and Water Resources: Examples Using R Preface Introduction to R and RStudio Citing this reference Copyright", " Hydraulics and Water Resources: Examples Using R Ed Maurer Professor, Civil, Environmental, and Sustainable Engineering Department, Santa Clara University 2024-03-06 Preface This is a compilation of various R exercises and examples created over many years. They have been used mostly in undergraduate civil engineering classes including fluid mechanics, hydraulics, and water resources. This is a dynamic work, and will be regularly updated as errors are identified, improved presentation is developed, or new topics or examples are introduced. I welcome any suggestions or comments. In what follows, text will be intentionally brief. More extensive discussion and description can be found in any fluid mechanics, applied hydraulics, or water resources engineering text. Symbology for hydraulics in this reference generally follows that of Finnemore and Maurer (2024). Fundamental equations will be introduced though the emphasis will be on applications to solve common problems. Also, since this is written by a civil engineer, the only fluids included are water and air, since that accounts for nearly all problems encountered in the field. Solving water problems is rarely done by hand calculations, though the importance of performing order of magnitude ‘back of the envelope’ calculations cannot be overstated. Whether using a hand calculator, spreadsheet, or a programming language to produce a solution, having a sense of when an answer is an outlier will help catch errors. Scripting languages are powerful tools for performing calculations, providing a fully traceable and reproducible path from your input to a solution. 
Open source languages have the benefit of being free to use, and invite users to be part of a community helping improve the language and its capabilities. The language of choice for this book is R (R Core Team, 2022), chosen for its straightforward syntax, powerful graphical capabilities, wide use in engineering and in many other disciplines, and by using the RStudio interface, it can look and feel a lot like Matlab® with which most engineering students have some experience. Introduction to R and RStudio No introduction to R or RStudio is provided here. It is assumed that the reader has installed R (and RStudio), is comfortable installing and updating packages, and understands the basics of R scripting. Some resources that can provide an introduction to R include: A brief overview, aimed at students at Santa Clara University. An Introduction to R, a comprehensive reference by the R Core Team. Introduction to Programming with R by Stauffer et al., materials for a university course, including interactive exercises. R for Water Resources Data Science, with both introductory and intermediate level courses online. As I developed these exercises and text, I learned R through the work of many others, and the excellent help offered by skilled people sharing their knowledge on stackoverflow. The methods shown here are not the only ways to solve these problems, and users are invited to share alternative or better solutions. Citing this reference Maurer, Ed, 2023. Hydraulics and Water Resources: Examples Using R, doi:10.5281/zenodo.7576843 https://edm44.github.io/hydr-watres-book/. Copyright This work is provided under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). As a summary, this license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. This is a summary of (and not a substitute for) the license. 
"],["units-in-fluid-mechanics.html", "Chapter 1 Units in Fluid Mechanics", " Chapter 1 Units in Fluid Mechanics Before beginning with problem solving methods it helps to recall some important quantities in fluid mechanics and their associated units. While the world has generally moved forward into standardizing the use of the SI unit system, the U.S. stubbornly holds onto the antiquated US (sometimes called the British Gravitational, BG) system. This means practicing engineers must be familiar with both systems, and be able to convert between the two systems. These important quantities are shown in Table 1.1. Table 1.1: Dimensions and units for common quantities. Quantity Symbol Dimensions US (or BG) Units SI Units US to SI multiply by Length L \\(L\\) \\(ft\\) \\(m\\) 0.3048 Acceleration a \\(LT^{-2}\\) \\(ft/s^2\\) \\(m/s^{2}\\) 0.3048 Mass m \\(M\\) \\(slug\\) \\(kg\\) 14.59 Force F \\(F\\) \\(lb\\) \\(N\\) 4.448 Density \\(\\rho\\) \\(ML^{-3}\\) \\(slug/ft^3\\) \\(kg/m^3\\) 515.4 Energy/Work FL \\({ft}\\cdot{lb}\\) \\({N}\\cdot{m}=joule (J)\\) 1.356 Flowrate Q \\(L^{3}/T\\) \\(ft^{3}/s\\)=cfs \\(m^{3}/s\\) 0.02832 Kinematic viscocity \\(\\nu\\) \\(L^{2}/T\\) \\(ft^{2}/s\\) \\(m^{2}/s\\) 0.0929 Power \\(FLT^{-1}\\) \\({ft}\\cdot{lb/s}\\) \\({N}\\cdot{m/s}=watt (W)\\) 1.356 Pressure p \\(FL^{-2}\\) \\(lb/in^2=psi\\) \\(N/m^2=Pa\\) 6895 Specific Weight \\(\\gamma\\) \\(FL^{-3}\\) \\(lb/ft^3\\) \\(N/m^3\\) 157.1 Velocity V \\(LT^{-1}\\) \\(ft/s\\) \\(m/s\\) 0.3048 (Dynamic) Viscocity \\(\\mu\\) \\(FTL^{-2}\\) \\({lb}\\cdot{s/ft^2}\\) \\({N}\\cdot{s/m^2}={Pa}\\cdot{s}\\) 47.88 Volume \\(\\forall\\) \\(L^3\\) \\(ft^3\\) \\(m^3\\) 0.02832 There are many other units that must be accommodated. For example, one may encounter the poise to describe (dynamic) viscosity (\\(1~Pa*s = 10~poise\\)), or the stoke for kinematic viscocity (\\(1~m^2/s=10^4~stokes\\)). 
Many hydraulic systems use gallons per minute (gpm) as a unit of flow (\\(1~ft^3/s=448.8~gpm\\)), and larger water systems often use millions of gallons per day (mgd) (\\(1~mgd = 1.547~ft^3/s\\)). For volume, the SI system often uses liters (\\(l\\)) instead of \\(m^3\\) (\\(1~m^3=1000~l\\)). One regular conversion that needs to occur is the translation between mass (m) and weight (W), where \\(W=mg\\) and \\(g\\) is gravitational acceleration on the earth’s surface: \\(g=9.81~m/s^2=32.2~ft/s^2\\). When working with forces (such as with momentum problems or hydrostatic forces) be sure to work with weights/forces, not mass. It is straightforward to use the conversion factors in the table to manipulate values between the systems, multiplying by the factor shown to go from US to SI units, or dividing to do the reverse. For example, converting a kinematic viscosity of \\(1*10^{-6}~m^2/s\\) from SI to US units: \\[{1*10^{-6}~m^2/s}*\\frac{1 ~ft^2/s}{0.0929~m^2/s}=1.076*10^{-5} ~ft^2/s\\] Another example converts between two quantities in the US system: 100 gallons per minute to cfs: \\[{100 ~gpm}*\\frac{1 ~cfs}{448.8 ~gpm}=0.223 ~cfs\\] The units package in R can do these conversions and more, and also checks that conversions are permissible (producing an error if incompatible units are used). units::units_options(set_units_mode = "symbols") Q_gpm <- units::set_units(100, gallon/min) Q_gpm #> 100 [gallon/min] Q_cfs <- units::set_units(Q_gpm, ft^3/s) Q_cfs #> 0.2228009 [ft^3/s] Repeating the unit conversion of viscosity using the units package: Example 1.1 Convert kinematic viscosity from SI to Eng units. nu <- units::set_units(1e-6, m^2/s) nu #> 1e-06 [m^2/s] units::set_units(nu, ft^2/s) #> 1.076391e-05 [ft^2/s] The units package also produces correct units during operations. For example, multiplying mass by g should produce weight. Example 1.2 Using the units package to produce correct units during mathematical operations. #If you travel at 88 ft/sec for 1 hour, how many km would you travel? 
v <- units::set_units(88, ft/s) t <- units::set_units(1, hour) d <- v*t d #> 316800 [ft] units::set_units(d, km) #> 96.56064 [km] #What is the weight of a 4 slug mass, in pounds and Newtons? m <- units::set_units(4, slug) g <- units::set_units(32.2, ft/s^2) w <- m*g #Notice the units are technically correct, but have not been simplified in this case w #> 128.8 [ft*slug/s^2] #These can be set manually to verify that lbf (pound-force) is a valid equivalent units::set_units(w, lbf) #> 128.8 [lbf] units::set_units(w, N) #> 572.9308 [N] "],["properties-of-water.html", "Chapter 2 Properties of water (and air) 2.1 Properties important for water standing still 2.2 Properties important for moving water 2.3 Atmospheric Properties", " Chapter 2 Properties of water (and air) Fundamental properties of water allow the description of the forces it exerts and how it behaves while in motion. A table of these properties can be generated with the hydraulics package using a command like water_table(units = \"SI\"). A summary of basic water properties, which vary with temperature, is shown in Table 2.1 for SI units and Table 2.2 for US (or Eng) units. 
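The rows of the tables that follow are internally consistent with the relation between kinematic and dynamic viscosity formalized later in the chapter as Equation (2.3); a quick spot check at 20 °C, using values from the 20 °C row of Table 2.1:

```r
# nu = mu/rho check, values taken from Table 2.1 at 20 C
rho <- 998.2     # density, kg/m^3
mu  <- 1.021e-3  # dynamic viscosity, N*s/m^2
nu  <- mu / rho  # kinematic viscosity, m^2/s
signif(nu, 4)
#> [1] 1.023e-06
```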
Table 2.1: Water properties in SI units Temp Density Spec_Weight Viscosity Kinem_Visc Sat_VP Surf_Tens Bulk_Mod C kg m-3 N m-3 N s m-2 m2 s-1 Pa N m-1 Pa \\(0\\) \\(999.9\\) \\(9809\\) \\(1.734 \\times 10^{-3}\\) \\(1.734 \\times 10^{-6}\\) \\(611.2\\) \\(7.57 \\times 10^{-2}\\) \\(2.02 \\times 10^{9}\\) \\(5\\) \\(1000.0\\) \\(9810\\) \\(1.501 \\times 10^{-3}\\) \\(1.501 \\times 10^{-6}\\) \\(872.6\\) \\(7.49 \\times 10^{-2}\\) \\(2.06 \\times 10^{9}\\) \\(10\\) \\(999.7\\) \\(9807\\) \\(1.310 \\times 10^{-3}\\) \\(1.311 \\times 10^{-6}\\) \\(1228\\) \\(7.42 \\times 10^{-2}\\) \\(2.10 \\times 10^{9}\\) \\(15\\) \\(999.1\\) \\(9801\\) \\(1.153 \\times 10^{-3}\\) \\(1.154 \\times 10^{-6}\\) \\(1706\\) \\(7.35 \\times 10^{-2}\\) \\(2.14 \\times 10^{9}\\) \\(20\\) \\(998.2\\) \\(9793\\) \\(1.021 \\times 10^{-3}\\) \\(1.023 \\times 10^{-6}\\) \\(2339\\) \\(7.27 \\times 10^{-2}\\) \\(2.18 \\times 10^{9}\\) \\(25\\) \\(997.1\\) \\(9781\\) \\(9.108 \\times 10^{-4}\\) \\(9.135 \\times 10^{-7}\\) \\(3170\\) \\(7.20 \\times 10^{-2}\\) \\(2.22 \\times 10^{9}\\) \\(30\\) \\(995.7\\) \\(9768\\) \\(8.174 \\times 10^{-4}\\) \\(8.210 \\times 10^{-7}\\) \\(4247\\) \\(7.12 \\times 10^{-2}\\) \\(2.25 \\times 10^{9}\\) \\(35\\) \\(994.1\\) \\(9752\\) \\(7.380 \\times 10^{-4}\\) \\(7.424 \\times 10^{-7}\\) \\(5629\\) \\(7.04 \\times 10^{-2}\\) \\(2.26 \\times 10^{9}\\) \\(40\\) \\(992.2\\) \\(9734\\) \\(6.699 \\times 10^{-4}\\) \\(6.751 \\times 10^{-7}\\) \\(7385\\) \\(6.96 \\times 10^{-2}\\) \\(2.28 \\times 10^{9}\\) \\(45\\) \\(990.2\\) \\(9714\\) \\(6.112 \\times 10^{-4}\\) \\(6.173 \\times 10^{-7}\\) \\(9595\\) \\(6.88 \\times 10^{-2}\\) \\(2.29 \\times 10^{9}\\) \\(50\\) \\(988.1\\) \\(9693\\) \\(5.605 \\times 10^{-4}\\) \\(5.672 \\times 10^{-7}\\) \\(1.235 \\times 10^{4}\\) \\(6.79 \\times 10^{-2}\\) \\(2.29 \\times 10^{9}\\) \\(55\\) \\(985.7\\) \\(9670\\) \\(5.162 \\times 10^{-4}\\) \\(5.237 \\times 10^{-7}\\) \\(1.576 \\times 10^{4}\\) \\(6.71 \\times 10^{-2}\\) \\(2.29 
\\times 10^{9}\\) \\(60\\) \\(983.2\\) \\(9645\\) \\(4.776 \\times 10^{-4}\\) \\(4.857 \\times 10^{-7}\\) \\(1.995 \\times 10^{4}\\) \\(6.62 \\times 10^{-2}\\) \\(2.28 \\times 10^{9}\\) \\(65\\) \\(980.6\\) \\(9619\\) \\(4.435 \\times 10^{-4}\\) \\(4.523 \\times 10^{-7}\\) \\(2.504 \\times 10^{4}\\) \\(6.54 \\times 10^{-2}\\) \\(2.26 \\times 10^{9}\\) \\(70\\) \\(977.7\\) \\(9592\\) \\(4.135 \\times 10^{-4}\\) \\(4.229 \\times 10^{-7}\\) \\(3.120 \\times 10^{4}\\) \\(6.45 \\times 10^{-2}\\) \\(2.25 \\times 10^{9}\\) \\(75\\) \\(974.8\\) \\(9563\\) \\(3.869 \\times 10^{-4}\\) \\(3.969 \\times 10^{-7}\\) \\(3.860 \\times 10^{4}\\) \\(6.36 \\times 10^{-2}\\) \\(2.23 \\times 10^{9}\\) \\(80\\) \\(971.7\\) \\(9533\\) \\(3.631 \\times 10^{-4}\\) \\(3.737 \\times 10^{-7}\\) \\(4.742 \\times 10^{4}\\) \\(6.27 \\times 10^{-2}\\) \\(2.20 \\times 10^{9}\\) \\(85\\) \\(968.5\\) \\(9501\\) \\(3.419 \\times 10^{-4}\\) \\(3.530 \\times 10^{-7}\\) \\(5.787 \\times 10^{4}\\) \\(6.18 \\times 10^{-2}\\) \\(2.17 \\times 10^{9}\\) \\(90\\) \\(965.2\\) \\(9468\\) \\(3.229 \\times 10^{-4}\\) \\(3.345 \\times 10^{-7}\\) \\(7.018 \\times 10^{4}\\) \\(6.08 \\times 10^{-2}\\) \\(2.14 \\times 10^{9}\\) \\(95\\) \\(961.7\\) \\(9434\\) \\(3.057 \\times 10^{-4}\\) \\(3.179 \\times 10^{-7}\\) \\(8.461 \\times 10^{4}\\) \\(5.99 \\times 10^{-2}\\) \\(2.10 \\times 10^{9}\\) \\(100\\) \\(958.1\\) \\(9399\\) \\(2.902 \\times 10^{-4}\\) \\(3.029 \\times 10^{-7}\\) \\(1.014 \\times 10^{5}\\) \\(5.89 \\times 10^{-2}\\) \\(2.07 \\times 10^{9}\\) Table 2.2: Water properties in US units Temp Density Spec_Weight Viscosity Kinem_Visc Sat_VP Surf_Tens Bulk_Mod F slug ft-3 lbf ft-3 lbf s ft-2 ft2 s-1 lbf ft-2 lbf ft-1 lbf ft-2 \\(32\\) \\(1.9\\) \\(62.42\\) \\(3.621 \\times 10^{-5}\\) \\(1.873 \\times 10^{-5}\\) \\(12.77\\) \\(5.18 \\times 10^{-3}\\) \\(4.22 \\times 10^{7}\\) \\(42\\) \\(1.9\\) \\(62.43\\) \\(3.087 \\times 10^{-5}\\) \\(1.596 \\times 10^{-5}\\) \\(18.94\\) \\(5.13 \\times 10^{-3}\\) \\(4.31 
\\times 10^{7}\\) \\(52\\) \\(1.9\\) \\(62.40\\) \\(2.658 \\times 10^{-5}\\) \\(1.375 \\times 10^{-5}\\) \\(27.62\\) \\(5.07 \\times 10^{-3}\\) \\(4.40 \\times 10^{7}\\) \\(62\\) \\(1.9\\) \\(62.36\\) \\(2.311 \\times 10^{-5}\\) \\(1.196 \\times 10^{-5}\\) \\(39.64\\) \\(5.02 \\times 10^{-3}\\) \\(4.50 \\times 10^{7}\\) \\(72\\) \\(1.9\\) \\(62.29\\) \\(2.026 \\times 10^{-5}\\) \\(1.050 \\times 10^{-5}\\) \\(56.00\\) \\(4.96 \\times 10^{-3}\\) \\(4.59 \\times 10^{7}\\) \\(82\\) \\(1.9\\) \\(62.20\\) \\(1.790 \\times 10^{-5}\\) \\(9.290 \\times 10^{-6}\\) \\(77.99\\) \\(4.90 \\times 10^{-3}\\) \\(4.67 \\times 10^{7}\\) \\(92\\) \\(1.9\\) \\(62.09\\) \\(1.594 \\times 10^{-5}\\) \\(8.286 \\times 10^{-6}\\) \\(107.2\\) \\(4.84 \\times 10^{-3}\\) \\(4.72 \\times 10^{7}\\) \\(102\\) \\(1.9\\) \\(61.97\\) \\(1.429 \\times 10^{-5}\\) \\(7.443 \\times 10^{-6}\\) \\(145.3\\) \\(4.78 \\times 10^{-3}\\) \\(4.75 \\times 10^{7}\\) \\(112\\) \\(1.9\\) \\(61.83\\) \\(1.289 \\times 10^{-5}\\) \\(6.732 \\times 10^{-6}\\) \\(194.7\\) \\(4.72 \\times 10^{-3}\\) \\(4.77 \\times 10^{7}\\) \\(122\\) \\(1.9\\) \\(61.68\\) \\(1.171 \\times 10^{-5}\\) \\(6.126 \\times 10^{-6}\\) \\(258\\) \\(4.66 \\times 10^{-3}\\) \\(4.78 \\times 10^{7}\\) \\(132\\) \\(1.9\\) \\(61.52\\) \\(1.069 \\times 10^{-5}\\) \\(5.608 \\times 10^{-6}\\) \\(338.1\\) \\(4.59 \\times 10^{-3}\\) \\(4.77 \\times 10^{7}\\) \\(142\\) \\(1.9\\) \\(61.34\\) \\(9.808 \\times 10^{-6}\\) \\(5.162 \\times 10^{-6}\\) \\(438.5\\) \\(4.53 \\times 10^{-3}\\) \\(4.75 \\times 10^{7}\\) \\(152\\) \\(1.9\\) \\(61.16\\) \\(9.046 \\times 10^{-6}\\) \\(4.775 \\times 10^{-6}\\) \\(563.2\\) \\(4.46 \\times 10^{-3}\\) \\(4.72 \\times 10^{7}\\) \\(162\\) \\(1.9\\) \\(60.96\\) \\(8.381 \\times 10^{-6}\\) \\(4.438 \\times 10^{-6}\\) \\(716.9\\) \\(4.39 \\times 10^{-3}\\) \\(4.68 \\times 10^{7}\\) \\(172\\) \\(1.9\\) \\(60.75\\) \\(7.797 \\times 10^{-6}\\) \\(4.144 \\times 10^{-6}\\) \\(904.5\\) \\(4.32 \\times 10^{-3}\\) \\(4.62 \\times 10^{7}\\) 
\\(182\\) \\(1.9\\) \\(60.53\\) \\(7.283 \\times 10^{-6}\\) \\(3.884 \\times 10^{-6}\\) \\(1132\\) \\(4.25 \\times 10^{-3}\\) \\(4.55 \\times 10^{7}\\) \\(192\\) \\(1.9\\) \\(60.30\\) \\(6.828 \\times 10^{-6}\\) \\(3.655 \\times 10^{-6}\\) \\(1405\\) \\(4.18 \\times 10^{-3}\\) \\(4.48 \\times 10^{7}\\) \\(202\\) \\(1.9\\) \\(60.06\\) \\(6.423 \\times 10^{-6}\\) \\(3.452 \\times 10^{-6}\\) \\(1731\\) \\(4.11 \\times 10^{-3}\\) \\(4.40 \\times 10^{7}\\) \\(212\\) \\(1.9\\) \\(59.81\\) \\(6.061 \\times 10^{-6}\\) \\(3.271 \\times 10^{-6}\\) \\(2118\\) \\(4.04 \\times 10^{-3}\\) \\(4.32 \\times 10^{7}\\) What follows is a brief discussion of some of these properties, and how they can be applied in R. All of the properties shown in the tables above are produced using the hydraulics R package. The documentation for that package provides details on its use. The water property functions in the hydraulics package can be called using the ret_units input to allow it to return an object of class units, as designated by the package units. This enables capabilities for new units to be deduced as operations are performed on the values. Concise examples are in the vignettes for the ‘units’ package. 2.1 Properties important for water standing still An intrinsic property of water is its mass. In the presence of gravity, it exerts a weight on its surroundings. Forces caused by the weight of water enter design in many ways. Example 2.1 uses water mass and weight in a calculation. Example 2.1 Determine the tension in the 8 mm diameter rope holding a bucket containing 12 liters of water. Ignore the weight of the bucket. Assume a water temperature of 20 \\(^\\circ\\)C. 
rho = hydraulics::dens(T = 20, units = 'SI', ret_units = TRUE) #Water density: rho #> 998.2336 [kg/m^3] #Find mass by multiplying by volume vol <- units::set_units(12, liter) m <- rho * vol #Convert mass to weight in Newtons g <- units::set_units(9.81, m/s^2) w <- units::set_units(m*g, "N") #Divide by cross-sectional area of the rope to obtain the tensile stress area <- units::set_units(pi/4 * 8^2, mm^2) tension <- w/area #Express the result in Pascals units::set_units(tension, Pa) #> 2337828 [Pa] #For demonstration, convert to psi units::set_units(tension, psi) #> 339.0733 [psi] For Example 2.1, units could have been tracked manually throughout, as if done by hand. The convenience of using the units package allows conversions that can be used to check hand calculations. Water expands as it is heated, which is part of what is driving sea-level rise globally. Approximately 90% of excess energy caused by global warming pollution is absorbed by oceans, with most of that occurring in the upper ocean: 0-700 m of depth (Fox-Kemper et al., 2021). Example 2.2 uses water density and conservation of mass in a calculation. Example 2.2 Assume the ocean is made of fresh water (the change in density of sea water with temperature is close enough to fresh water for this illustration). Assume a 700 m thick upper layer of the ocean. Assume this upper layer has an initial temperature of 15 \\(^\\circ\\)C, and calculate the change in mean sea level due to a 2 \\(^\\circ\\)C rise in temperature of this upper layer. It may help to consider a single 1m x 1m column of water of height h=700 m under original conditions. 
Since mass is conserved, and mass = volume x density, this is simple: \\[LWh_1\\cdot\\rho_1=LWh_2\\cdot\\rho_2\\] or \\[h_2=h_1\\frac{\\rho_1}{\\rho_2}\\] rho1 = hydraulics::dens(T = 15, units = 'SI') rho2 = hydraulics::dens(T = 17, units = 'SI') h2 = 700 * (rho1/rho2) cat(sprintf("Change in sea level = %.3f m\\n", h2-700)) #> Change in sea level = 0.227 m The bulk modulus, Ev, relates the change in specific volume to the change in pressure, and is defined as in Equation (2.1). \\[\\begin{equation} E_v=-v\\frac{dp}{dv} \\tag{2.1} \\end{equation}\\] which can be discretized: \\[\\begin{equation} \\frac{v_2-v_1}{v_1}=-\\frac{p_2-p_1}{E_v} \\tag{2.2} \\end{equation}\\] where \\(v\\) is the specific volume (\\(v=\\frac{1}{\\rho}\\)) and \\(p\\) is pressure. Example 2.3 shows one application of this. Example 2.3 A barrel of water has an initial temperature of 15 \\(^\\circ\\)C at atmospheric pressure (p=0 Pa gage). Plot the pressure the barrel must exert to have no change in volume as the water warms to 20 \\(^\\circ\\)C. Here essentially the larger specific volume (at a higher temperature) is then compressed by \\({\\Delta}P\\) to return the volume to its original value. Thus, subscript 1 indicates the warmer condition, and subscript 2 the original at 15 \\(^\\circ\\)C. dp <- function(tmp) { rho2 <- hydraulics::dens(T = 15, units = 'SI') rho1 <- hydraulics::dens(T = tmp, units = 'SI') Ev <- hydraulics::Ev(T = tmp, units = 'SI') return((-((1/rho2) - (1/rho1))/(1/rho1))*Ev) } temps <- seq(from=15, to=20, by=1) plot(temps,dp(temps), xlab="Temperature, C", ylab="Pressure increase, Pa", type="b") Figure 2.1: Approximate change in pressure as water temperature increases. These very high pressures required to compress water, even by a small fraction, validate the ordinary assumption that water can be considered incompressible in most applications. 
It should be noted that the Ev values produced by the hydraulics package only vary with temperature, and assume standard atmospheric pressure; in reality, Ev values increase with increasing pressure so the values plotted here serve only as a demonstration and underestimate the pressure increase. 2.2 Properties important for moving water When describing the behavior of moving water in civil engineering infrastructure like pipes and channels there are three primary water properties used in calculations, all of which vary with water temperature: density (\\(\\rho\\)), dynamic viscosity (\\(\\mu\\)), and kinematic viscosity (\\(\\nu\\)), which are related by Equation (2.3). \\[\\begin{equation} \\nu=\\frac{\\mu}{\\rho} \\tag{2.3} \\end{equation}\\] Viscosity is caused by interaction of the fluid molecules as they are subjected to a shearing force. This is often illustrated by a conceptual sketch of two parallel plates, one fixed and one moving at a constant speed, with a fluid in between. Perhaps more intuitively, a s'more can be used. If the velocity of the marshmallow filling varies linearly, it will be stationary (V=0) at the bottom and moving at the same velocity as the upper cracker at the top (V=U). The force needed to move the upper cracker can be calculated using Equation (2.4) \\[\\begin{equation} F=A{\\mu}\\frac{dV}{dy} \\tag{2.4} \\end{equation}\\] where y is the distance between the crackers and A is the cross-sectional area of a cracker. Equation (2.4) is often written in terms of shear stress \\({\\tau}\\) as in Equation (2.5) \\[\\begin{equation} \\frac{F}{A}={\\tau}={\\mu}\\frac{dV}{dy} \\tag{2.5} \\end{equation}\\] The following demonstrates a use of these relationships. Example 2.4 Determine the force required to slide the top cracker at 1 cm/s with a thickness of marshmallow of 0.5 cm. The cross-sectional area of the crackers is 10 cm\\(^2\\). 
The viscosity (dynamic viscosity, as can be discerned by the units) of marshmallow is about 0.1 Pa\\(\\cdot\\)s. #Assign variables A <- units::set_units(10, cm^2) U <- units::set_units(1, cm/s) y <- units::set_units(0.5, cm) mu <- units::set_units(0.1, Pa*s) #Find shear stress tau <- mu * U / y tau #> 0.2 [Pa] #Since stress is F/A, multiply tau by A to find F, convert to Newtons and pounds units::set_units(tau*A, N) #> 2e-04 [N] units::set_units(tau*A, lbf) #> 4.496179e-05 [lbf] Water is less viscous than marshmallow, so viscosity for water has much lower values than in the example. Values for water can be obtained using the hydraulics R package for calculations, using the dens, dvisc, and kvisc functions. All of the water property functions can accept a list of input temperature values, enabling visualization of a property with varying water temperature, as shown in Figure 2.2. Ts <- seq(0, 100, 10) nus <- hydraulics::kvisc(T = Ts, units = 'SI') xlbl <- expression("Temperature, " (degree*C)) ylbl <- expression("Kinematic viscosity," ~nu~ (m^{2}/s)) par(cex=0.8, mgp = c(2,0.7,0)) plot(Ts, nus, xlab = xlbl, ylab = ylbl, type="l") Figure 2.2: Variation of kinematic viscosity with temperature. 2.3 Atmospheric Properties Since water interacts with the atmosphere, through processes like evaporation and condensation, some basic properties of air are helpful. Selected characteristics of the standard atmosphere, as determined by the International Civil Aviation Organization (ICAO), are included in the hydraulics package. Three functions atmpres, atmdens, and atmtemp return different properties of the standard atmosphere, which vary with altitude. These are summarized in Table 2.3 for SI units and Table 2.4 for US (or Eng) units. 
Table 2.3: ICAO standard atmospheric properties in SI units Altitude Temp Pressure Density m C Pa kg m-3 0 15.00 101325.0 1.22500 1000 8.50 89876.3 1.11166 2000 2.00 79501.4 1.00655 3000 -4.49 70121.1 0.90925 4000 -10.98 61660.4 0.81935 5000 -17.47 54048.2 0.73643 6000 -23.96 47217.6 0.66011 7000 -30.45 41105.2 0.59002 8000 -36.93 35651.5 0.52579 9000 -43.42 30800.6 0.46706 10000 -49.90 26499.8 0.41351 11000 -56.38 22699.8 0.36480 12000 -62.85 19354.6 0.32062 13000 -69.33 16421.2 0.28067 14000 -75.80 13859.4 0.24465 15000 -82.27 11631.9 0.21229 Table 2.4: ICAO standard atmospheric properties in US units Altitude Temp Pressure Density ft F lbf ft-2 slug ft-3 0 59.00 2116.2 0.00237 5000 41.17 1760.9 0.00205 10000 23.36 1455.6 0.00175 15000 5.55 1194.8 0.00149 20000 -12.25 973.3 0.00127 25000 -30.05 786.3 0.00107 30000 -47.83 629.7 0.00089 35000 -65.61 499.3 0.00074 40000 -83.37 391.8 0.00061 45000 -101.13 303.9 0.00049 50000 -118.88 232.7 0.00040 As with water property functions, the data in the table can be extracted using individual commands for use in calculations. All atmospheric functions have input arguments of altitude (ft or m), unit system (SI or Eng), and whether or not units should be returned. hydraulics::atmpres(alt = 3000, units = "SI", ret_units = TRUE) #> 70121.14 [Pa] 2.3.1 Ideal gas law Because air is compressible, its density changes with pressure, and the temperature responds to compression. These are related through the ideal gas law, Equation (2.6) \\[\\begin{equation} \\begin{split} p={\\rho}RT\\\\ p{\\forall}=mRT \\end{split} \\tag{2.6} \\end{equation}\\] where \\(p\\) is absolute pressure, \\(\\forall\\) is the volume, \\(R\\) is the gas constant, \\(T\\) is absolute temperature, and \\(m\\) is the mass of the gas. 
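As a quick consistency check of Equation (2.6), the ICAO table values can be reproduced through \\(p={\\rho}RT\\); a sketch at 3000 m altitude, using the pressure and temperature from Table 2.3 and R = 287 from Table 2.5:

```r
# Air density at 3000 m from the ideal gas law, p = rho*R*T
p <- 70121.1            # absolute pressure, Pa (Table 2.3 at 3000 m)
Temp <- -4.49 + 273.15  # absolute temperature, K (ideal gas law requires K)
rho <- p / (287 * Temp)
round(rho, 5)  # close to the tabulated 0.90925 kg/m^3
```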
When air changes its condition between two states, the ideal gas law can be restated as Equation (2.7) \\[\\begin{equation} \\frac{p_1{\\forall_1}}{T_1}=\\frac{p_2{\\forall_2}}{T_2} \\tag{2.7} \\end{equation}\\] Two convenient forms of Equation (2.7) apply for specific conditions. If mass is conserved and conditions are isothermal, then m, R, and T are constant: \\[\\begin{equation} p_1{\\forall_1}=p_2{\\forall_2} \\tag{2.8} \\end{equation}\\] If mass is conserved and temperature changes adiabatically (meaning no heat is exchanged with surroundings) and reversibly, these are isentropic conditions, governed by Equations (2.9). \\[\\begin{equation} \\begin{split} p_1{\\forall_1}^k=p_2{\\forall_2}^k\\\\ \\frac{T_2}{T_1}=\\left(\\frac{p_2}{p_1}\\right)^{\\frac{k-1}{k}} \\end{split} \\tag{2.9} \\end{equation}\\] Properties of air used in these formulations of the ideal gas law are shown in Table 2.5. Table 2.5: Air properties at standard sea-level atmospheric pressure Gas Constant, R Sp. Heat, cp Sp. Heat, cv Sp. Heat Ratio, k 1715 ft lbf degR-1 slug-1 6000 ft lbf degR-1 slug-1 4285 ft lbf degR-1 slug-1 1.4 287 m N K-1 kg-1 1003 m N K-1 kg-1 717 m N K-1 kg-1 1.4 "],["hydrostatics---forces-exerted-by-water-bodies.html", "Chapter 3 Hydrostatics - forces exerted by water bodies 3.1 Pressure and force 3.2 Force on a plane area 3.3 Forces on curved surfaces", " Chapter 3 Hydrostatics - forces exerted by water bodies When water is motionless its weight exerts a pressure on surfaces with which it is in contact. The force is a function of the density of the fluid and the depth. Figure 3.1: The Clywedog dam by Nigel Brown, CC BY-SA 2.0, via Wikimedia Commons 3.1 Pressure and force A consideration of all of the forces acting on a particle in a fluid in equilibrium produces Equation (3.1). 
\\[\\begin{equation} \\frac{dp}{dz}=-{\\gamma} \\tag{3.1} \\end{equation}\\] where \\(p\\) is pressure (\\(p=F/A\\)), \\(z\\) is height measured upward from a datum, and \\({\\gamma}\\) is the specific weight of the fluid (\\(\\gamma={\\rho}g\\)). Rewritten using depth (downward from the water surface), \\(h\\), produces Equation (3.2). \\[\\begin{equation} h=\\frac{p}{\\gamma} \\tag{3.2} \\end{equation}\\] Example 3.1 Find the force on the bottom of a 0.4 m diameter barrel filled with (20 \\(^\\circ\\)C) water for barrel heights from 0.5 m to 1.5 m. area <- pi/4*0.4^2 gamma <- hydraulics::specwt(T = 20, units = 'SI') heights <- seq(from=0.5, to=1.5, by=0.05) pressures <- gamma * heights forces <- pressures * area plot(forces,heights, xlab="Total force on barrel bottom, N", ylab="Depth of water, m", type="l") grid() Figure 3.2: Force on barrel bottom. The linear relationship is what was expected. 3.2 Force on a plane area For a submerged flat surface, the magnitude of the hydrostatic force can be found using Equation (3.3). \\[\\begin{equation} F={\\gamma}y_c\\sin{\\theta}A={\\gamma}h_cA \\tag{3.3} \\end{equation}\\] The force is located as defined by Equation (3.4). \\[\\begin{equation} y_p=y_c+\\frac{I_c}{y_cA} \\tag{3.4} \\end{equation}\\] The variables correspond to the definitions in Figure 3.3. Figure 3.3: Forces on a plane area, by Ertunc, CC BY-SA 4.0, via Wikimedia Commons The location of the centroid and the moment of inertia, \\(I_c\\), for some common shapes are shown in Figure 3.4 (Moore, J. et al., 2022). The variables correspond to the definitions in Figure 3.4. Figure 3.4: Centroids and moments of inertia for common shapes Example 3.2 A 6 m long hinged gate with a width of 1 m (into the paper) is at an angle of 60\\(^\\circ\\) and is held in place by a horizontal cable. Plot the tension in the cable, \\(T\\), as the water depth, \\(h\\), varies from 0.1 to 4 m in depth. Ignore the weight of the gate. 
Figure 3.5: Reservoir with hinged gate (Olivier Cleyne, CC0 license, via Wikimedia Commons) The surface area of the gate that is wetted is \\(A=L{\\cdot}w=\\frac{h{\\cdot}w}{\\sin(60)}\\). The wetted area is rectangular, so \\(h_c=\\frac{h}{2}\\). The magnitude of the force uses Equation (3.3): \\[F={\\gamma}h_cA={\\gamma}\\frac{h}{2}\\frac{h{\\cdot}w}{\\sin(60)}\\] The distance along the plane from the water surface to the centroid of the wetted area is \\(y_c=\\frac{1}{2}\\frac{h}{\\sin(60)}\\). The moment of inertia for the rectangular wetted area is \\(I_c=\\frac{1}{12}w\\left(\\frac{h}{\\sin(60)}\\right)^3\\). Taking moments about the hinge at the bottom of the gate yields \\(T{\\cdot}6\\sin(60)-F{\\cdot}\\left(\\frac{h}{\\sin(60)}-y_p\\right)=0\\) or \\(T=\\frac{F}{6\\cdot\\sin(60)}\\left(\\frac{h}{\\sin(60)}-y_p\\right)\\) These equations can be used in R to create the desired plot. gate_length <- 6.0 w <- 1.0 theta <- 60*pi/180 #convert angle to radians h <- seq(from=0.1, to=4.1, by=0.25) gamma <- hydraulics::specwt(T = 20, units = 'SI') area <- h*w/sin(theta) hc <- h/2 Force <- gamma*hc*area yc <- (1/2)*h/(sin(theta)) Ic <- (1/12)*w*(h/sin(theta))^3 yp <- yc + (Ic/(yc*area)) Tension <- Force/(gate_length*sin(theta)) * (h/sin(theta) - yp) plot(Tension,h, xlab="Cable tension, N", ylab="Depth of water, m", type="l") grid() 3.3 Forces on curved surfaces For forces on curved surfaces, the procedure is often to calculate the vertical, \\(F_V\\), and horizontal, \\(F_H\\), hydrostatic forces separately. \\(F_H\\) is simpler, since it is the horizontal force on a (plane) vertical projection of the submerged surface, so the methods of Section 3.2 apply. The vertical component, \\(F_V\\), for a submerged surface with water above has a magnitude of the weight of the water above it, which acts through the center of volume. For a curved surface with water below it the magnitude of \\(F_V\\) is the volume of the ‘missing’ water that would be above it, and the force acts upward. 
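As a quick numeric illustration of the ‘missing water’ idea (a sketch in Python rather than the book's R, with assumed values not taken from the text): for a quarter-circle surface of radius R holding back water of depth R, the missing volume per unit width is the square of side R minus the quarter-circle area, and \(F_V\) is the weight of that volume, acting upward.

```python
import math

# Assumed illustrative values (not from the text)
gamma = 9790.0  # specific weight of water near 20 C, N/m^3
R = 2.0         # radius of the quarter-circle surface, m
w = 1.0         # width into the page, m

# 'Missing' water above the surface: a square of side R minus a quarter circle
missing_volume = (R**2 - math.pi * R**2 / 4) * w  # m^3

# F_V equals the weight of the missing water and acts upward
Fv = gamma * missing_volume  # N
print(round(Fv))  # roughly 8.4 kN for these assumed values
```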
Figure 3.6: Forces on curved surfaces, by Ertunc, CC BY-SA 4.0, via Wikimedia Commons A classic example of a curved surface in civil engineering hydraulics is a radial (or Tainter) gate, as in Figure 3.7. Figure 3.7: Radial gates on the Rogue River, OR. To simplify the geometry, a problem is presented in Example 3.3 where the gate meets the base at a horizontal angle. Example 3.3 A radial gate with radius R=6 m and a width of 1 m (into the paper) controls water. Find the horizontal and vertical hydrostatic forces for depths, \\(h\\), from 0 to 6 m. The horizontal hydrostatic force is that acting on a rectangle of height \\(h\\) and width \\(w\\): \\[F_H=\\frac{1}{2}{\\gamma}h^2w\\] which acts at a height of \\(y_c=\\frac{h}{3}\\) from the bottom of the gate. The vertical component has a magnitude equal to the weight of the ‘missing’ water indicated on the sketch. The calculation of its volume requires the area of a circular sector minus the area of a triangle above it. The angle, \\(\\theta\\), is found using geometry to be \\({\\theta}=cos^{-1}\\left(\\frac{R-h}{R}\\right)\\). Using the equations for areas of these two components as in Figure 3.4, the following is obtained: \\[F_V={\\gamma}w\\left(\\frac{R^2\\theta}{2}-\\frac{R-h}{2}R\\sin{\\theta}\\right)\\] The line of action of \\(F_V\\) can be determined by combining the components for centroids of the composite shapes, again following Figure 3.4. Because the line of action of the resultant force on a circular gate must pass through the center of the circle (since hydrostatic forces always act normal to the gate), the sum of moments about the hinge due to \\(F_H\\) and \\(F_V\\) must equal zero. \\[\\sum{M}_{hinge}=0=F_H\\left(R-h/3\\right)-F_V{\\cdot}x_c\\] This produces the equation: \\[x_c=\\frac{F_H\\left(R-h/3\\right)}{F_V}\\] These equations can be solved in many ways, such as the following. 
R <- units::set_units(6.0, m) w <- units::set_units(1.0, m) gamma <- hydraulics::specwt(T = 20, units = 'SI', ret_units = TRUE) h <- units::set_units(seq(from=0, to=6, by=1), m) #angle in radians throughout, units not needed theta <- units::drop_units(acos((R-h)/R)) area <- h*w/sin(theta) Fh <- (1/2)*gamma*h^2*w yc <- h/3 Fv <- gamma*w*((R^2*theta)/2 - ((R-h)/2) * R*sin(theta)) xc <- Fh*(R-h/3)/Fv Ftotal <- sqrt(Fh^2+Fv^2) tibble::tibble(h=h, Fh=Fh, yc=yc, Fv=Fv, xc=xc, Ftotal=Ftotal) #> # A tibble: 7 × 6 #> h Fh yc Fv xc Ftotal #> [m] [N] [m] [N] [m] [N] #> 1 0 0 0 0 NaN 0 #> 2 1 4896. 0.333 22041. 1.26 22578. #> 3 2 19585. 0.667 60665. 1.72 63748. #> 4 3 44067. 1 108261. 2.04 116886. #> 5 4 78341. 1.33 161583. 2.26 179573. #> 6 5 122408. 1.67 218398. 2.43 250363. #> 7 6 176268. 2 276881. 2.55 328228. "],["water-flowing-in-pipes-energy-losses.html", "Chapter 4 Water flowing in pipes: energy losses 4.1 Important dimensionless quantity 4.2 Friction Loss in Circular Pipes 4.3 Solving Pipe friction problems 4.4 Solving for head loss (Type 1 problems) 4.5 Solving for Flow or Velocity (Type 2 problems) 4.6 Solving for pipe diameter, D (Type 3 problems) 4.7 Parallel pipes: solving a system of equations 4.8 Simple pipe networks: the Hardy-Cross method", " Chapter 4 Water flowing in pipes: energy losses Flow in civil engineering infrastructure is usually either in pipes, where it is not exposed to the atmosphere and flows under pressure, or open channels (canals, rivers, etc.). This chapter is concerned only with water flow in pipes. Once water begins to move, engineering problems often need to relate the flow rate to the energy dissipated. To accomplish this, the flow needs to be classified using dimensionless quantities since energy dissipation varies with the flow conditions. 4.1 Important dimensionless quantity As water begins to move, the characteristics are described by two quantities in engineering hydraulics: the Reynolds number, Re, and the Froude number, Fr. 
The latter is more important for open channel flow and will be discussed in that chapter. Reynolds number describes the turbulence of the flow, defined by the ratio of inertial forces, expressed by velocity V and a characteristic length such as pipe diameter, D, to viscous forces as expressed by the kinematic viscosity \\(\\nu\\), as in Equation (4.1) \\[\\begin{equation} Re=\\frac{VD}{\\nu} \\tag{4.1} \\end{equation}\\] For open channels the characteristic length is the hydraulic depth, the area of flow divided by the top width. For adequately turbulent conditions to exist, Reynolds numbers should exceed 4000 for full pipes, and 2000 for open channels. 4.2 Friction Loss in Circular Pipes The energy at any point along a pipe containing flowing water is often described by the energy per unit weight, or energy head, E, as in Equation (4.2) \\[\\begin{equation} E = z+\\frac{P}{\\gamma}+\\alpha\\frac{V^2}{2g} \\tag{4.2} \\end{equation}\\] where P is the pressure, \\(\\gamma=\\rho g\\) is the specific weight of water, z is the elevation of the point, V is the average velocity, and each term has units of length. \\(\\alpha\\) is a kinetic energy adjustment factor to account for non-uniform velocity distribution across the cross-section. \\(\\alpha\\) is typically assumed to be 1.0 for turbulent flow in circular pipes because the value is close to 1.0 and \\(\\frac{V^2}{2g}\\) (the velocity head) tends to be small in relation to other terms in the equation. Some applications where velocity varies widely across a cross-section, such as a river channel with flow in a main channel and a flood plain, will need to account for values of \\(\\alpha\\) other than one. As water flows through a pipe energy is lost due to friction with the pipe walls and local disturbances (minor losses). The energy loss between two sections is expressed as \\({E_1} - {h_l} = {E_2}\\). 
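As a small numeric illustration of Equation (4.2) (a Python sketch with assumed values, not from the text; the book's worked examples use R), consider a point at elevation 10 m with a pressure of 200 kPa and a mean velocity of 2 m/s, taking \(\alpha = 1\):

```python
# Energy head E = z + P/gamma + alpha*V^2/(2g), Equation (4.2)
# Assumed illustrative values (not from the text)
z = 10.0        # elevation, m
P = 200_000.0   # pressure, Pa
V = 2.0         # mean velocity, m/s
gamma = 9790.0  # specific weight of water near 20 C, N/m^3
g = 9.81
alpha = 1.0     # kinetic energy correction, ~1 for turbulent pipe flow

pressure_head = P / gamma               # about 20.4 m
velocity_head = alpha * V**2 / (2 * g)  # about 0.2 m
E = z + pressure_head + velocity_head   # total energy head, m
print(round(E, 2))
```

Note how small the velocity head is relative to the other terms, which is why assuming \(\alpha \approx 1\) rarely matters in pipe flow.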
When pipes are long, with \\(\\frac{L}{D}>1000\\), friction losses dominate the energy loss in the system, and the head loss, \\(h_l\\), is calculated as the head loss due to friction, \\(h_f\\). This energy head loss due to friction with the walls of the pipe is described by the Darcy-Weisbach equation, which estimates the energy loss per unit weight, or head loss \\({h_f}\\), which has units of length. For circular pipes it is expressed by Equation (4.3) \\[\\begin{equation} h_f = \\frac{fL}{D}\\frac{V^2}{2g} = \\frac{8fL}{\\pi^{2}gD^{5}}Q^{2} \\tag{4.3} \\end{equation}\\] In Equation (4.3) f is the friction factor, typically calculated with the Colebrook equation (Equation (4.4)). \\[\\begin{equation} \\frac{1}{\\sqrt{f}} = -2\\log\\left(\\frac{\\frac{k_s}{D}}{3.7} + \\frac{2.51}{Re\\sqrt{f}}\\right) \\tag{4.4} \\end{equation}\\] In Equation (4.4) \\(k_s\\) is the absolute roughness of the pipe wall. There are close approximations to the Colebrook equation that have an explicit form to facilitate hand-calculations, but when using R or other computational tools there is no need to use approximations. 4.3 Solving Pipe friction problems As water flows through a pipe energy is lost due to friction with the pipe walls and local disturbances (minor losses). For this example assume minor losses are negligible. The energy head loss due to friction with the walls of the pipe is described by the Darcy-Weisbach equation (Equation (4.3)), which estimates the energy loss per unit weight, or head loss hf, which has units of length. The Colebrook equation (Equation (4.4)) is commonly plotted as a Moody diagram to illustrate the relationships between the variables, in Figure 4.1. hydraulics::moody() Figure 4.1: Moody Diagram Because of the form of the equations, they can sometimes be a challenge to solve, especially by hand. It can help to classify the types of problems based on what variable is unknown. These are summarized in Table 4.1. 
Table 4.1: Types of Energy Loss Problems in Pipe Flow Type Known Unknown 1 Q (or V), D, ks, L hL 2 hL, D, ks, L Q (or V) 3 hL, Q (or V), ks, L D When solving by hand the types in Table 4.1 become progressively more difficult, but when using solvers the difference in complexity is subtle. 4.4 Solving for head loss (Type 1 problems) The simplest pipe flow problem to solve is when the unknown is head loss, hf (equivalent to hL in the absence of minor losses), since all variables on the right side of the Darcy-Weisbach equation are known, except f. 4.4.1 Solving for head loss by manual iteration While all unknowns are on the right side of Equation (4.3), iteration is still required because the Colebrook equation, Equation (4.4), cannot be solved explicitly for f. An illustration of solving this type of problem is shown in Example 4.1. Example 4.1 Find the head loss (due to friction) of 20\\(^\\circ\\)C water in a pipe with the following characteristics: Q=0.416 m\\(^3\\)/s, L=100m, D=0.5m, ks=0.046mm. Since the water temperature is known, first find the kinematic viscosity of water, \\({\\nu}\\), since it is needed for the Reynolds number. This can be obtained from a table in a reference or using software. Here we will use the hydraulics R package. nu <- hydraulics::kvisc(T=20, units="SI") cat(sprintf("Kinematic viscosity = %.3e m2/s\\n", nu)) #> Kinematic viscosity = 1.023e-06 m2/s We will need the Reynolds Number to use the Colebrook equation, and that can be calculated since Q is known. This can be accomplished with a calculator, or using other software (R is used here): Q <- 0.416 D <- 0.5 A <- (3.14/4)*D^2 V <- Q/A Re <- V*D/nu cat(sprintf("Velocity = %.3f m/s, Re = %.3e\\n", V, Re)) #> Velocity = 2.120 m/s, Re = 1.036e+06 Now the only unknown in the Colebrook equation is f, but unfortunately f appears on both sides of the equation. To begin the iterative process, a first guess at f is needed. 
A reasonable value to use is the minimum f value, fmin, given the known \\(\\frac{k_s}{D}=\\frac{0.046}{500}=0.000092=9.2\\cdot 10^{-5}\\). Reading horizontally from the right vertical axis to the left on the Moody diagram provides a value for \\(f_{min}\\approx 0.012\\). Numerically, it can be seen that f is independent of Re for large values of Re. When Re is large the second term of the Colebrook equation becomes small and the equation approaches Equation (4.5). \\[\\begin{equation} \\frac{1}{\\sqrt{f}} = -2\\log\\left(\\frac{\\frac{k_s}{D}}{3.7}\\right) \\tag{4.5} \\end{equation}\\] This independence of f with varying Re values is visible in the Moody Diagram, Figure 4.1, toward the right, where the lines become horizontal. Using Equation (4.5) the same value of fmin=0.012 is obtained, since the Colebrook equation defines the Moody diagram. Iteration 1: Using f=0.012 the right side of the Colebrook equation is 8.656. The next estimate for f is then obtained by \\(\\frac{1}{\\sqrt{f}}=8.656\\) so f=0.0133. Iteration 2: Using the new value of f=0.0133 in the right side of the Colebrook equation produces 8.677. A new value for f is obtained by \\(\\frac{1}{\\sqrt{f}}=8.677\\) so f=0.0133. The solution has converged! Using the new value of f, the value for hf is calculated: \\[h_f = \\frac{8fL}{\\pi^{2}gD^{5}}Q^{2}=\\frac{8(0.0133)(100)}{\\pi^{2}(9.81)(0.5)^{5}}(0.416)^{2}=0.61 ~m\\] 4.4.2 Solving for head loss using an empirical approximation A shortcut that can be used to avoid iterating to find the friction factor is to use an approximation to the Colebrook equation that can be solved explicitly. One example is the Haaland equation (4.6) (Haaland, 1983). \\[\\begin{equation} \\frac{1}{\\sqrt{f}} = -1.8\\log\\left(\\left(\\frac{\\frac{k_s}{D}}{3.7}\\right)^{1.11}+\\frac{6.9}{Re}\\right) \\tag{4.6} \\end{equation}\\] For ordinary pipe flow conditions in water pipes, Equation (4.6) is accurate to within 1.5% of the Colebrook equation. 
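The manual iteration above and the Haaland approximation are easy to cross-check numerically. The sketch below (in Python rather than the book's R) repeats the fixed-point iteration on the Colebrook equation for the values of Example 4.1 and compares the result with the Haaland estimate:

```python
import math

# Values from Example 4.1
ks_over_D = 0.000046 / 0.5  # relative roughness
Re = 1.036e6                # Reynolds number
L, D, Q, g = 100.0, 0.5, 0.416, 9.81

# Fixed-point iteration on the Colebrook equation, starting from f_min
f = 0.012
for _ in range(20):
    rhs = -2 * math.log10(ks_over_D / 3.7 + 2.51 / (Re * math.sqrt(f)))
    f = 1 / rhs**2  # next estimate of the friction factor

# Haaland explicit approximation, Equation (4.6)
f_haaland = (-1.8 * math.log10((ks_over_D / 3.7)**1.11 + 6.9 / Re))**-2

# Head loss from the Darcy-Weisbach equation, Equation (4.3)
hf = 8 * f * L / (math.pi**2 * g * D**5) * Q**2

print(round(f, 4), round(f_haaland, 4), round(hf, 2))
```

Both friction factor estimates agree to within about 1%, and the head loss reproduces the hand calculation of roughly 0.61 m.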
There are many other empirical equations, one common one being that of Swamee and Jain (Swamee & Jain, 1976), shown in Equation (4.7). \\[\\begin{equation} \\frac{1}{\\sqrt{f}} = -2\\log\\left(\\frac{\\frac{k_s}{D}}{3.7}+\\frac{5.74}{Re^{0.9}}\\right) \\tag{4.7} \\end{equation}\\] These approximations are useful for solving problems by hand or in spreadsheets, and their accuracy is generally within the uncertainty of other input variables like the absolute roughness. 4.4.3 Solving for head loss using an equation solver Rather than use an empirical approximation (as in Section 4.4.2) to the Colebrook equation, it is straightforward to apply an equation solver to use the Colebrook equation directly. This is demonstrated in Example 4.2. Example 4.2 Find the friction factor for the same conditions as Example 4.1: D=0.5m, ks=0.046mm, and Re=1.036e+06. First, rearrange the Colebrook equation so all terms are on one side of the equation, as in Equation (4.8). \\[\\begin{equation} -2\\log\\left(\\frac{\\frac{k_s}{D}}{3.7} + \\frac{2.51}{Re\\sqrt{f}}\\right) - \\frac{1}{\\sqrt{f}}=0 \\tag{4.8} \\end{equation}\\] Create a function using whatever equation solving platform you prefer. Here the R software is used: colebrk <- function(f,ks,D,Re) -2.0*log10((ks/D)/3.7 + 2.51/(Re*(f^0.5)))-1/(f^0.5) Find the root of the function (where it equals zero), specifying a reasonable range for f values using the interval argument: f <- uniroot(colebrk, interval = c(0.008,0.1), ks=0.000046, D=0.5, Re=1.036e+06)$root cat(sprintf("f = %.4f\\n", f)) #> f = 0.0133 The same value for hf as above results. 4.4.4 Solving for head loss using an R package Equation solvers for implicit equations, like in Section 4.4.3, are built into the R package hydraulics and can be applied directly, without writing a separate function. 
Example 4.3 Using the hydraulics R package, find the friction factor and head loss for the same conditions as Example 4.2: Q=0.416 m3/s, L=100 m, D=0.5m, ks=0.046mm, and nu = 1.023053e-06 m2/s. ans <- hydraulics::darcyweisbach(Q = 0.416,D = 0.5, L = 100, ks = 0.000046, nu = 1.023053e-06, units = c("SI")) #> hf missing: solving a Type 1 problem cat(sprintf("Reynolds no: %.0f\\nFriction Fact: %.4f\\nHead Loss: %.2f m\\n", ans$Re, ans$f, ans$hf)) #> Reynolds no: 1035465 #> Friction Fact: 0.0133 #> Head Loss: 0.61 m If only the f value is needed, the colebrook function can be used. f <- hydraulics::colebrook(ks=0.000046, V= 2.120, D=0.5, nu=1.023e-06) cat(sprintf("f = %.4f\\n", f)) #> f = 0.0133 Notice that the colebrook function needs input in dimensionally consistent units. Because it is dimensionally homogeneous and the input dimensions are consistent, the unit system does not need to be defined as with many other functions in the hydraulics package. 4.5 Solving for Flow or Velocity (Type 2 problems) When flow (Q) or velocity (V) is unknown, the Reynolds number cannot be determined, complicating the solution of the Colebrook equation. As with Section 4.4 there are several strategies for solving these, ranging from iterative manual calculations to using software packages. For Type 2 problems, since D is known, once either V or Q is known, the other is known, since \\(Q=V{\\cdot}A=V\\frac{\\pi}{4}D^2\\). 4.5.1 Solving for Q (or V) using manual iteration Solving a Type 2 problem can be done with manual iterations, as demonstrated in Example 4.4. Example 4.4 Find the flow rate, Q, of 20\\(^\\circ\\)C water in a pipe with the following characteristics: hf=0.6m, L=100m, D=0.5m, ks=0.046mm. First rearrange the Darcy-Weisbach equation to express V as a function of f, substituting all of the known quantities: \\[V = \\sqrt{\\frac{h_f}{L}\\frac{2gD}{f}}=\\frac{0.243}{\\sqrt{f}}\\] That provides one equation relating V and f. 
The second equation relating V and f is one of the friction factor equations, such as the Colebrook equation or its graphic representation in the Moody diagram. An initial guess at a value for f is obtained using fmin=0.012 as was done in Example 4.1. Iteration 1: \\(V=\\frac{0.243}{\\sqrt{0.012}}=2.218\\); \\(Re=\\frac{2.218\\cdot 0.5}{1.023e-06}=1.084 \\cdot 10^6\\). A new f value is obtained from the Moody diagram or an equation using the new Re value: \\(f \\approx 0.0131\\) Iteration 2: \\(V=\\frac{0.243}{\\sqrt{0.0131}}=2.123\\); \\(Re=\\frac{2.123\\cdot 0.5}{1.023e-06}=1.038 \\cdot 10^6\\). A new f estimate: \\(f \\approx 0.0132\\) The iteration converges very quickly if a reasonable first guess is made. Using V=2.12 m/s, \\(Q = AV = \\left(\\frac{\\pi}{4}\\right)D^2V=0.416 m^3/s\\) 4.5.2 Solving for Q Using an Explicit Equation Solving Type 2 problems using iteration is not necessary, since an explicit equation based on the Colebrook equation can be derived. Solving the Darcy-Weisbach equation for \\(\\frac{1}{\\sqrt{f}}\\) and substituting that into the Colebrook equation produces Equation (4.9). \\[\\begin{equation} Q=-2.221D^2\\sqrt{\\frac{gDh_f}{L}} \\log\\left(\\frac{\\frac{k_s}{D}}{3.7} + \\frac{1.784\\nu}{D}\\sqrt{\\frac{L}{gDh_f}}\\right) \\tag{4.9} \\end{equation}\\] This can be solved explicitly for Q=0.413 m3/s. 4.5.3 Solving for Q Using an R package Using software to solve the problem allows the use of the Colebrook equation in a straightforward format. The hydraulics package in R is applied to the same problem as above. ans <- hydraulics::darcyweisbach(D=0.5, hf=0.6, L=100, ks=0.000046, nu=1.023e-06, units = c('SI')) knitr::kable(format(as.data.frame(ans), digits = 3), format = "pipe") Q V L D hf f ks Re 0.406 2.07 100 0.5 0.6 0.0133 4.6e-05 1010392 The answer differs from the manual iteration by just over 2%, well within the precision with which the inputs are typically known. 
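Equation (4.9) can also be evaluated directly; the short sketch below (in Python, mirroring the book's R examples) reproduces the explicit solution for the conditions of Example 4.4:

```python
import math

# Conditions of Example 4.4
D, hf, L = 0.5, 0.6, 100.0          # diameter, head loss, length (m)
ks, nu, g = 0.000046, 1.023e-6, 9.81  # roughness (m), viscosity (m^2/s), gravity

# Explicit flow equation from Darcy-Weisbach + Colebrook, Equation (4.9)
Q = (-2.221 * D**2 * math.sqrt(g * D * hf / L)
     * math.log10(ks / D / 3.7 + (1.784 * nu / D) * math.sqrt(L / (g * D * hf))))
print(round(Q, 3))  # 0.413, matching the text
```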
4.6 Solving for pipe diameter, D (Type 3 problems) When D is unknown, neither Re nor the relative roughness \\(\\frac{ks}{D}\\) is known. Referring to the Moody diagram, Figure 4.1, the difficulty in estimating a value for f (on the left axis) is evident, since the position on neither the right axis (\\(\\frac{ks}{D}\\)) nor the x-axis (Re) is known. 4.6.1 Solving for D using manual iterations Solving for D using manual iterations is done by first rearranging Equation (4.9) to allow it to be solved for zero, as in Equation (4.10). \\[\\begin{equation} -2.221D^2\\sqrt{\\frac{gDh_f}{L}} \\log\\left(\\frac{\\frac{k_s}{D}}{3.7} + \\frac{1.784\\nu}{D}\\sqrt{\\frac{L}{gDh_f}}\\right)-Q=0 \\tag{4.10} \\end{equation}\\] Using this with manual iterations is demonstrated in Example 4.5. Example 4.5 For a problem similar to Example 4.4, use Q=0.416 m3/s and solve for the required pipe diameter, D. This can be solved manually by guessing values and repeating the calculation in a spreadsheet or with a tool like R. Iteration 1: Guess an arbitrary value of D=0.3m. Solve the left side of Equation (4.10) to obtain a value of -0.31 Iteration 2: Guess another value for D=1.0m. The left side of Equation (4.10) produces a value for the function of 2.11 The root, when the function equals zero, lies between the two values, so the correct D is between 0.3 and 1.0. Repeated trials, aided by plotting the results, can home in on the solution; the root is seen to lie very close to D=0.5 m. 4.6.2 Solving for D using an equation solver An equation solver automatically accomplishes the manual steps of the prior demonstration. 
Equation (4.10) can be written as a function that can then be solved for the root, again using R software for the demonstration: q_fcn <- function(D, Q, hf, L, ks, nu, g) { -2.221 * D^2 * sqrt(( g * D * hf)/L) * log10((ks/D)/3.7 + (1.784 * nu/D) * sqrt(L/(g * D * hf))) - Q } The uniroot function can solve the equation in R (or use a comparable approach in other software) for a reasonable range of D values: ans <- uniroot(q_fcn, interval=c(0.01,4.0),Q=0.416, hf=0.6, L=100, ks=0.000046, nu=1.023053e-06, g=9.81)$root cat(sprintf("D = %.3f m\\n", ans)) #> D = 0.501 m 4.6.3 Solving for D using an R package The hydraulics R package implements an equation solving technique like that above to allow the direct solution of Type 3 problems. The prior example is solved using that package as shown below. ans <- hydraulics::darcyweisbach(Q=0.416, hf=0.6, L=100, ks=0.000046, nu=1.023e-06, ret_units = TRUE, units = c('SI')) knitr::kable(format(as.data.frame(ans), digits = 3), format = "pipe") ans Q 0.416 [m^3/s] V 2.11 [m/s] L 100 [m] D 0.501 [m] hf 0.6 [m] f 0.0133 [1] ks 4.6e-05 [m] Re 1032785 [1] 4.7 Parallel pipes: solving a system of equations In the examples above the challenge was often to solve a single implicit equation. The manual iteration approach can work to solve two equations, but as the number of equations increases, especially when using implicit equations, an equation solver is needed. For the case of a simple pipe loop, manual iterations are impractical; for this reason fixed values of f are often assumed, or an empirical energy loss equation is used. However, a single loop, identical to a parallel pipe problem, can be used to demonstrate how systems of equations can be solved simultaneously for systems of pipes. Example 4.6 demonstrates the process of assembling the equations for a solver for a parallel pipe problem. 
Example 4.6 Two pipes carry a flow of Q=0.5 m3/s, as depicted in Figure 4.2 Figure 4.2: Parallel Pipe Example The fundamental equations needed are the Darcy-Weisbach equation, the Colebrook equation, and continuity (conservation of mass). For the illustrated system, this means: The flows through each pipe must add to the total flow The head loss through Pipe 1 must equal that of Pipe 2 This could be set up as a system of anywhere from 2 to 10 equations to solve simultaneously. In this example four equations are used: \\[\\begin{equation} Q_1+Q_2-Q_{total}=V_1\\frac{\\pi}{4}D_1^2+V_2\\frac{\\pi}{4}D_2^2-0.5m^3/s=0 \\tag{4.11} \\end{equation}\\] and \\[\\begin{equation} h_{f1}-h_{f2} = \\frac{f_1L_1}{D_1}\\frac{V_1^2}{2g} -\\frac{f_2L_2}{D_2}\\frac{V_2^2}{2g}=0 \\tag{4.12} \\end{equation}\\] The other two equations are the Colebrook equation (4.8) for solving for the friction factor for each pipe. These four equations can be solved simultaneously using an equation solver, such as the fsolve function in the R package pracma. 
#assign known inputs - SI units Qsum <- 0.5 D1 <- 0.2 D2 <- 0.3 L1 <- 400 L2 <- 600 ks <- 0.000025 g <- 9.81 nu <- hydraulics::kvisc(T=100, units='SI') #Set up the function that sets up 4 unknowns (x) and 4 equations (y) F_trial <- function(x) { V1 <- x[1] V2 <- x[2] f1 <- x[3] f2 <- x[4] Re1 <- V1*D1/nu Re2 <- V2*D2/nu y <- numeric(length(x)) #Continuity - flows in each branch must add to total y[1] <- V1*pi/4*D1^2 + V2*pi/4*D2^2 - Qsum #Darcy-Weisbach equation for head loss - must be equal in each branch y[2] <- f1*L1*V1^2/(D1*2*g) - f2*L2*V2^2/(D2*2*g) #Colebrook equation for friction factors y[3] <- -2.0*log10((ks/D1)/3.7 + 2.51/(Re1*(f1^0.5)))-1/(f1^0.5) y[4] <- -2.0*log10((ks/D2)/3.7 + 2.51/(Re2*(f2^0.5)))-1/(f2^0.5) return(y) } #provide initial guesses for unknowns and run the fsolve command xstart <- c(2.0, 2.0, 0.01, 0.01) z <- pracma::fsolve(F_trial, xstart) #prepare some results to print Q1 <- z$x[1]*pi/4*D1^2 Q2 <- z$x[2]*pi/4*D2^2 hf1 <- z$x[3]*L1*z$x[1]^2/(D1*2*g) hf2 <- z$x[4]*L2*z$x[2]^2/(D2*2*g) cat(sprintf("Q1=%.2f, Q2=%.2f, V1=%.1f, V2=%.1f, hf1=%.1f, hf2=%.1f, f1=%.3f, f2=%.3f\\n", Q1,Q2,z$x[1],z$x[2],hf1,hf2,z$x[3],z$x[4])) #> Q1=0.15, Q2=0.35, V1=4.8, V2=5.0, hf1=30.0, hf2=30.0, f1=0.013, f2=0.012 If the fsolve command fails, a simple solution is sometimes to revise your initial guesses and try again. There are other solvers in R and every other scripting language that can be similarly implemented. If the simplification were applied for fixed f values, then Equations (4.11) and (4.12) can be solved simultaneously for V1 and V2. 4.8 Simple pipe networks: the Hardy-Cross method For water pipe networks containing multiple loops, manually setting up systems of equations is impractical. In addition, hand calculations always assume fixed f values or use an empirical friction loss equation to simplify calculations. A typical method to solve for the flow in each pipe segment in a small network uses the Hardy-Cross method. 
This consists of setting up an initial guess of flow (magnitude and direction) for each pipe segment, ensuring conservation of mass is preserved at each node (or vertex) in the network. Then calculations are performed for each loop, ensuring energy is conserved. When using the Darcy-Weisbach equation, Equation (4.3), for friction loss, the head loss in each pipe segment is usually expressed in a condensed form as \\({h_f = KQ^{2}}\\) where K is defined as in Equation (4.13). \\[\\begin{equation} K = \\frac{8fL}{\\pi^{2}gD^{5}} \\tag{4.13} \\end{equation}\\] When doing calculations by hand fixed f values are assumed, but when using a computational tool like R any of the methods for estimating f and hf may be applied. The Hardy-Cross method begins by assuming flows in each segment of a loop. These initial flows are then adjusted in a series of iterations. The flow adjustment in each loop is calculated at each iteration using Equation (4.14). \\[\\begin{equation} \\Delta{Q_i} = -\\frac{\\sum_{j=1}^{p_i} K_{ij}Q_j|Q_j|}{\\sum_{j=1}^{p_i} 2K_{ij}|Q_j|} \\tag{4.14} \\end{equation}\\] Calculations for small systems with two or three loops can be done manually with fixed f and K values. Using the hydraulics R package to solve a small pipe network is demonstrated in Example 4.7. Example 4.7 Find the flows in each pipe in the system shown in Figure 4.3. Input consists of pipe characteristics, pipe order and initial flows for each loop, as shown on the diagram. Figure 4.3: A sample pipe network with pipe numbers indicated in black Input for this system, assuming fixed f values, would look like the following. (If fixed K values are provided, f, L and D are not needed). These f values were estimated using \\(ks=0.00025 m\\) in the form of the Colebrook equation for fully rough flows, Equation (4.5). 
dfpipes <- data.frame( ID = c(1,2,3,4,5,6,7,8,9,10), #pipe ID D = c(0.3,0.2,0.2,0.2,0.2,0.15,0.25,0.15,0.15,0.25), #diameter in m L = c(250,100,125,125,100,100,125,100,100,125), #length in m f = c(.01879,.02075,.02075,.02075,.02075,.02233,.01964,.02233,.02233,.01964) ) loops <- list(c(1,2,3,4,5),c(4,6,7,8),c(3,9,10,6)) Qs <- list(c(.040,.040,.02,-.02,-.04),c(.02,0,0,-.02),c(-.02,.02,0,0)) Running the hardycross function and looking at the output after three iterations (defined by n_iter): ans <- hydraulics::hardycross(dfpipes = dfpipes, loops = loops, Qs = Qs, n_iter = 3, units = "SI") knitr::kable(ans$dfloops, digits = 4, format = "pipe", padding=0) loop pipe flow 1 1 0.0383 1 2 0.0383 1 3 0.0232 1 4 -0.0258 1 5 -0.0417 2 4 0.0258 2 6 0.0090 2 7 0.0041 2 8 -0.0159 3 3 -0.0232 3 9 0.0151 3 10 -0.0049 3 6 -0.0090 The output pipe data frame has added columns, including the flow (where direction is that for the first loop containing the segment). knitr::kable(ans$dfpipes, digits = 4, format = "pipe", padding=0) ID D L f Q K 1 0.30 250 0.0188 0.0383 159.7828 2 0.20 100 0.0208 0.0383 535.9666 3 0.20 125 0.0208 0.0232 669.9582 4 0.20 125 0.0208 -0.0258 669.9582 5 0.20 100 0.0208 -0.0417 535.9666 6 0.15 100 0.0223 0.0090 2430.5356 7 0.25 125 0.0196 0.0041 207.7883 8 0.15 100 0.0223 -0.0159 2430.5356 9 0.15 100 0.0223 0.0151 2430.5356 10 0.25 125 0.0196 -0.0049 207.7883 While the Hardy-Cross method is often used with fixed f (or K) values when it is used in exercises performed by hand, the use of the Colebrook equation allows friction losses to vary with Reynolds number. To use this approach the input data must include absolute roughness. 
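Stepping back to the fixed-f run, the \(\Delta Q\) correction of Equation (4.14) can be checked by hand for the first loop (a Python sketch using the K values from the output table above and the initial flows from the first loop of Qs):

```python
# K values for pipes 1-5 (the first loop) from the output table above,
# with the corresponding initial flow guesses (signs give direction)
K = [159.7828, 535.9666, 669.9582, 669.9582, 535.9666]
Q = [0.040, 0.040, 0.020, -0.020, -0.040]

# One Hardy-Cross correction for the loop, Equation (4.14):
# dQ = -sum(K*Q*|Q|) / sum(2*K*|Q|)
num = sum(k * q * abs(q) for k, q in zip(K, Q))
den = sum(2 * k * abs(q) for k, q in zip(K, Q))
dQ = -num / den

# The first correction to pipe 1 already lands close to the converged 0.0383
print(round(0.040 + dQ, 4))
```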
Example values are included here: dfpipes <- data.frame( ID = c(1,2,3,4,5,6,7,8,9,10), #pipe ID D = c(0.3,0.2,0.2,0.2,0.2,0.15,0.25,0.15,0.15,0.25), #diameter in m L = c(250,100,125,125,100,100,125,100,100,125), #length in m ks = rep(0.00025,10) #absolute roughness, m ) loops <- list(c(1,2,3,4,5),c(4,6,7,8),c(3,9,10,6)) Qs <- list(c(.040,.040,.02,-.02,-.04),c(.02,0,0,-.02),c(-.02,.02,0,0)) The effect of allowing the calculation of f to be (correctly) dependent on velocity (via the Reynolds number) can be seen, though the effect on final flow values is small. ans <- hydraulics::hardycross(dfpipes = dfpipes, loops = loops, Qs = Qs, n_iter = 3, units = "SI") knitr::kable(ans$dfpipes, digits = 4, format = "pipe", padding=0) ID D L ks Q f K 1 0.30 250 3e-04 0.0382 0.0207 176.1877 2 0.20 100 3e-04 0.0382 0.0218 562.9732 3 0.20 125 3e-04 0.0230 0.0224 723.1119 4 0.20 125 3e-04 -0.0258 0.0222 718.1439 5 0.20 100 3e-04 -0.0418 0.0217 560.8321 6 0.15 100 3e-04 0.0088 0.0248 2700.4710 7 0.25 125 3e-04 0.0040 0.0280 296.3990 8 0.15 100 3e-04 -0.0160 0.0238 2590.2795 9 0.15 100 3e-04 0.0152 0.0239 2598.5553 10 0.25 125 3e-04 -0.0048 0.0270 285.4983 "],["flow-in-open-channels.html", "Chapter 5 Flow in open channels 5.1 An important dimensionless quantity 5.2 Equations for open channel flow 5.3 Trapezoidal channels 5.4 Circular Channels (flowing partially full) 5.5 Critical flow 5.6 Flow in Rectangular Channels 5.7 Gradually varied steady flow 5.8 Rapidly varied flow (the hydraulic jump)", " Chapter 5 Flow in open channels Where flowing water is exposed to the atmosphere, and thus not under pressure, its condition is called open channel flow. 
Typical design challenges can be: Determining how deep water will flow in a channel Finding the bottom slope required to carry a defined flow in a channel Comparing different cross-sectional shapes and dimensions to carry flow In pipe flow the cross-sectional area does not change with flow rate, which simplifies some aspects of calculations. By contrast, in open channel flow conditions including flow depth, area, and roughness can all vary with flow rate, which tends to make the equations more cumbersome. In civil engineering applications, roughness characteristics are not usually considered as variable with flow rate. In what follows, three conditions for flow are considered: Uniform flow, where flow characteristics do not vary along the length of a channel Gradually varied flow, where flow responds to an obstruction or change in channel conditions with a gradual adjustment in flow depth Rapidly varied flow, where an abrupt channel transition results in a rapid change in water surface, the most important case of which is the hydraulic jump 5.1 An important dimensionless quantity For open channel flow, given a channel shape and flow rate, flow can usually exist at two different depths, termed subcritical (slow, deep) and supercritical (shallow, fast). The exception is at critical flow conditions, where only one depth exists, the critical depth. Which of these depths is exhibited by the flow is determined by the slope and roughness of the channel. The Froude number characterizes whether flow is critical, supercritical or subcritical, and is defined by Equation (5.1) \\[\\begin{equation} Fr=\\frac{V}{\\sqrt{gD}} \\tag{5.1} \\end{equation}\\] The Froude number characterizes flow as: Fr Condition Description <1.0 subcritical slow, deep =1.0 critical undulating, transitional >1.0 supercritical fast, shallow Critical flow is important in open-channel flow applications and is discussed further below. 
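Equation (5.1) is simple to apply; the short sketch below (in Python rather than the book's R, with assumed values not from the text) computes a Froude number and classifies the flow:

```python
import math

def froude(V, D, g=9.81):
    """Froude number, Equation (5.1): V / sqrt(g*D), with D the hydraulic depth."""
    return V / math.sqrt(g * D)

# Assumed example: V = 1.5 m/s in a channel with hydraulic depth D = 1.2 m
Fr = froude(V=1.5, D=1.2)
regime = "subcritical" if Fr < 1 else ("supercritical" if Fr > 1 else "critical")
print(round(Fr, 3), regime)  # Fr < 1: slow, deep flow
```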
5.2 Equations for open channel flow Flow conditions in an open channel under uniform flow conditions are often related by the Manning equation (5.2). \\[\\begin{equation} Q=A\\frac{C}{n}{R}^{\\frac{2}{3}}{S}^{\\frac{1}{2}} \\tag{5.2} \\end{equation}\\] In Equation (5.2), C is 1.0 for SI units and 1.49 for Eng (British Gravitational, English, or U.S. Customary) units. Q is the flow rate, A is the cross-sectional flow area, n is the Manning roughness coefficient, S is the longitudinal channel slope, and R is the hydraulic radius, defined by equation (5.3) \\[\\begin{equation} R=\\frac{A}{P} \\tag{5.3} \\end{equation}\\] where P is the wetted perimeter. Critical depth is defined by the relation (at critical conditions) in Equation (5.4) \\[\\begin{equation} \\frac{Q^{2}B}{g\\,A^{3}}=1 \\tag{5.4} \\end{equation}\\] where B is the width of the water surface (top width). Because the channel geometry is included in A and R, it helps to work with specific shapes in adapting these equations. The two most common are trapezoidal and circular, included in Sections 5.3 and 5.4 below. As with pipe flow, the energy equation applies for one-dimensional open channel flow as well, Equation (5.5): \\[\\begin{equation} \\frac{V_1^2}{2g}+y_1+z_1=\\frac{V_2^2}{2g}+y_2+z_2+h_L \\tag{5.5} \\end{equation}\\] where point 1 is upstream of point 2, V is the flow velocity, y is the flow depth, and z is the elevation of the channel bottom. \\(h_L\\) is the energy head loss from point 1 to point 2. For uniform flow, \\(h_L\\) is the drop in elevation between the two points due to the channel slope. 5.3 Trapezoidal channels In engineering applications one of the most common channel shapes is trapezoidal. 
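A minimal R sketch of Equations (5.2) and (5.3), using hypothetical rectangular-channel values (2 m wide, flowing 1 m deep) purely to show how the pieces fit together:

```r
#Manning equation, Equation (5.2), with R = A/P from Equation (5.3)
manning_Q <- function(A, P, n, S, C = 1.0) {   #C = 1.0 for SI units
  R <- A / P                                   #hydraulic radius
  A * (C / n) * R^(2/3) * sqrt(S)
}
manning_Q(A = 2 * 1, P = 2 + 2 * 1, n = 0.013, S = 0.001)  #about 3.1 m^3/s
```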
Figure 5.1: Typical symmetrical trapezoidal cross section The geometrical relationships for a trapezoid are: \\[\\begin{equation} A=(b+my)y \\tag{5.6} \\end{equation}\\] \\[\\begin{equation} P=b+2y\\sqrt{1+m^2} \\tag{5.7} \\end{equation}\\] Combining Equations (5.6) and (5.7) yields: \\[\\begin{equation} R=\\frac{A}{P}=\\frac{\\left(b+my\\right)y}{b+2y\\sqrt{1+m^2}} \\tag{5.8} \\end{equation}\\] Top width: \\(B=b+2\\,m\\,y\\). Substituting Equations (5.6) and (5.8) into the Manning equation produces Equation (5.9). \\[\\begin{equation} Q=\\frac{C}{n}{\\frac{\\left(by+my^2\\right)^{\\frac{5}{3}}}{\\left(b+2y\\sqrt{1+m^2}\\right)^\\frac{2}{3}}}{S}^{\\frac{1}{2}} \\tag{5.9} \\end{equation}\\] 5.3.1 Solving the Manning equation in R To solve Equation (5.9) when any variable other than Q is unknown, it is straightforward to rearrange it into the form f(y) = 0. \\[\\begin{equation} Q-\\frac{C}{n}{\\frac{\\left(by+my^2\\right)^{\\frac{5}{3}}}{\\left(b+2y\\sqrt{1+m^2}\\right)^\\frac{2}{3}}}{S}^{\\frac{1}{2}}=0 \\tag{5.10} \\end{equation}\\] This allows the use of a standard solver to find the root(s). If solving it by hand, trial and error can be employed as well. Example 5.1 demonstrates the solution of Equation (5.10) for the flow depth, y. A trial-and-error approach can be applied, and with careful selection of guesses a solution can be obtained relatively quickly. Using solvers makes the process much quicker and less prone to error. Example 5.1 Find the flow depth, y, for a trapezoidal channel with Q=225 ft3/s, n=0.016, m=2, b=10 ft, S=0.0006. The Manning equation can be set up as a function in terms of a missing variable, here using normal depth, y, as the missing variable. yfun <- function(y) { Q - (((y * (b + m * y)) ^ (5 / 3) * sqrt(S)) * (C / n) / ((b + 2 * y * sqrt(1 + m ^ 2)) ^ (2 / 3))) } Because these use US Customary (or English) units, C=1.486. Define all of the needed input variables for the function. Q <- 225. 
n <- 0.016 m <- 2 b <- 10.0 S <- 0.0006 C <- 1.486 Use the R function uniroot to find a single root within a defined interval. Set the interval (the range of possible y values in which to search for a root) to cover all plausible values, here from essentially zero to 200 ft. ans <- uniroot(yfun, interval = c(0.0000001, 200), extendInt = "yes") cat(sprintf("Normal Depth: %.3f ft\\n", ans$root)) #> Normal Depth: 3.406 ft Functions can usually be given multiple values as input, returning the corresponding values of output. This allows plots to be created to show, for example, how the left side of Equation (5.10) varies with different values of depth, y. ys <- seq(0.1, 5, 0.1) plot(ys,yfun(ys), type='l', xlab = "y, ft", ylab = "Function to solve for zero") abline(h=0) grid() Figure 5.2: Variation of the left side of Equation (5.10) with y for Example 5.1. This validates the result in the example, showing the root of Equation (5.10), where the function has a value of 0, occurs for a depth, y, of a little less than 3.5 ft. 5.3.2 Solving the Manning equation with the hydraulics R package The hydraulics package has a manningt (the ‘t’ is for ‘trapezoid’) function for trapezoidal channels. Example 5.2 demonstrates its usage. Example 5.2 Find the uniform (normal) flow depth, y, for a trapezoidal channel with Q=225 ft3/s, n=0.016, m=2, b=10 ft, S=0.0006. Specifying “Eng” units ensures the correct C value is used. Sf is the same as S in Equations (5.2) and (5.9) since flow is uniform. ans <- hydraulics::manningt(Q = 225., n = 0.016, m = 2, b = 10., Sf = 0.0006, units = "Eng") cat(sprintf("Normal Depth: %.3f ft\\n", ans$y)) #> Normal Depth: 3.406 ft #critical depth is also returned, along with other variables. cat(sprintf("Critical Depth: %.3f ft\\n", ans$yc)) #> Critical Depth: 2.154 ft 5.3.3 Solving the Manning equation using a spreadsheet like Excel Spreadsheet software is very popular and can accomplish many technical tasks, such as solving equations. 
This example uses Excel with its solver add-in activated, though other spreadsheet software has similar solver add-ins that can be used. The first step is to enter the input data, for the same example as above, along with an initial guess for the variable you wish to solve for. The equation for which a root will be determined is typed in using the initial guess for y in this case. At this point you could use a trial-and-error approach and simply try different values for y until the equation produces something close to 0. A more efficient method is to use a solver. Check that the solver add-in is activated (in Options) and open it. Set the objective to the cell containing the equation, with a target value of zero, changing the cell that holds the guess for y. Click Solve and the y value that produces a zero for the equation will appear. If you need to solve for multiple roots, you will need to start from different initial guesses. 5.3.4 Optimal trapezoidal geometry Most fluid mechanics texts that include open channel flow include a derivation of optimal geometry for a trapezoidal channel. This is also called the most efficient cross section. What this means is that for a given A and m, there is an optimal flow depth and bottom width for the channel, defined by Equations (5.11) and (5.12). \\[\\begin{equation} b_{opt}=2y\\left(\\sqrt{1+m^2}-m\\right) \\tag{5.11} \\end{equation}\\] \\[\\begin{equation} y_{opt}=\\sqrt{\\frac{A}{2\\sqrt{1+m^2}-m}} \\tag{5.12} \\end{equation}\\] These may be calculated manually, but they are also returned by the manningt function of the hydraulics package in R. Example 5.3 demonstrates this. Example 5.3 Find the optimal channel width to transmit 360 ft3/s at a depth of 3 ft with n=0.015, m=1, S=0.00088. 
ans <- hydraulics::manningt(Q = 360., n = 0.015, m = 1, y = 3.0, Sf = 0.00088, units = "Eng") knitr::kable(format(as.data.frame(ans), digits = 2), format = "pipe", padding=0) Q V A P R y b m Sf B n yc Fr Re bopt 360 5.3 68 28 2.4 3 20 1 0.00088 26 0.015 2.1 0.57 1159705 4.8 cat(sprintf("Optimal bottom width: %.5f ft\\n", ans$bopt)) #> Optimal bottom width: 4.76753 ft The results show that the required width is approximately 20 ft, while the bottom width for optimal hydraulic efficiency would be 4.76 ft. To check the depth that would be associated with a channel of the optimal width, substitute the optimal width for b and solve for y: ans <- hydraulics::manningt(Q = 360., n = 0.015, m = 1, b = 4.767534, Sf = 0.00088, units = "Eng") cat(sprintf("Optimal depth: %.5f ft\\n", ans$yopt)) #> Optimal depth: 5.75492 ft 5.4 Circular Channels (flowing partially full) Civil engineers encounter many situations with circular pipes that are flowing only partially full, such as storm and sanitary sewers. Figure 5.3: Typical circular cross section The relationships between the depth of water and the values needed in the Manning equation are: Depth (or fractional depth as written here) is described by Equation (5.13) \\[\\begin{equation} \\frac{y}{D}=\\frac{1}{2}\\left(1-\\cos{\\frac{\\theta}{2}}\\right) \\tag{5.13} \\end{equation}\\] Area is described by Equation (5.14) \\[\\begin{equation} A=\\left(\\frac{\\theta-\\sin{\\theta}}{8}\\right)D^2 \\tag{5.14} \\end{equation}\\] (Be sure to use theta in radians.) Wetted perimeter is described by Equation (5.15) \\[\\begin{equation} P=\\frac{D\\theta}{2} \\tag{5.15} \\end{equation}\\] Combining Equations (5.14) and (5.15): \\[\\begin{equation} R=\\frac{D}{4}\\left(1-\\frac{\\sin{\\theta}}{\\theta}\\right) \\tag{5.16} \\end{equation}\\] Top width: \\(B=D\\,\\sin{\\frac{\\theta}{2}}\\) Substituting Equations (5.14) and (5.16) into the Manning equation, Equation (5.2), produces (5.17). 
\\[\\begin{equation} \\theta^{-\\frac{2}{3}}\\left(\\theta-\\sin{\\theta}\\right)^\\frac{5}{3}-CnQD^{-\\frac{8}{3}}S^{-\\frac{1}{2}}=0 \\tag{5.17} \\end{equation}\\] where C=20.16 for SI units and C=13.53 for US Customary (English) units. 5.4.1 Solving the Manning equation for a circular pipe in R As was demonstrated with pipe flow, a function could be written with Equation (5.17) and a solver applied to find the value of \\(\\theta\\) for the given flow conditions with a known D, S, n and Q. The value for \\(\\theta\\) could then be used with Equations (5.13), (5.14) and (5.15) to recover geometric values. Hydraulic analysis of circular pipes flowing partially full often accounts for the value of Manning’s n varying with depth (Camp, 1946); some standards recommend fixed n values, and others require the use of a depth-varying n. The R package hydraulics has implemented those routines to enable these calculations, including using a fixed n (the default) or a depth-varying n. For an existing pipe, a common problem is the determination of the depth, y, that a given flow, Q, will have given a pipe diameter d, slope S and roughness n. Example 5.4 demonstrates this. Example 5.4 Find the uniform (normal) flow depth, y, for a circular pipe with diameter d=0.2 m carrying Q=0.01 m3/s, with n=0.013 and S=0.001. Do this assuming both that Manning n is constant with depth and that it varies with depth. The function manningc from the hydraulics package is used. Any one of the variables in the Manning equation, and related geometric variables, may be treated as an unknown. 
ans <- hydraulics::manningc(Q=0.01, n=0.013, Sf=0.001, d = 0.2, units="SI", ret_units = TRUE) ans2 <- hydraulics::manningc(Q=0.01, n=0.013, Sf=0.001, d = 0.2, n_var = TRUE, units="SI", ret_units = TRUE) df <- data.frame(Constant_n = unlist(ans), Variable_n = unlist(ans2)) knitr::kable(df, format = "html", digits=3, padding = 0, col.names = c("Constant n","Variable n")) |> kableExtra::kable_styling(full_width = F) Constant n Variable n Q 0.010 0.010 V 0.376 0.344 A 0.027 0.029 P 0.437 0.482 R 0.061 0.060 y 0.158 0.174 d 0.200 0.200 Sf 0.001 0.001 n 0.013 0.014 yc 0.085 0.085 Fr 0.297 0.235 Re 22342.979 20270.210 Qf 0.010 0.010 It is also sometimes convenient to see a cross-section diagram. hydraulics::xc_circle(y = ans$y, d=ans$d, units = "SI") 5.5 Critical flow Critical flow in open channel flow is described in general by Equation (5.4). For any channel geometry and flow rate a convenient plot is a specific energy diagram, which illustrates the different flow depths that can occur for any given specific energy. Specific energy is defined by Equation (5.18). \\[\\begin{equation} E=y+\\frac{V^2}{2g} \\tag{5.18} \\end{equation}\\] It can be interpreted as the total energy head, or energy per unit weight, relative to the channel bottom. For a trapezoidal channel, critical flow conditions occur as described by Equation (5.4). Combining that with trapezoidal geometry produces Equation (5.19) \\[\\begin{equation} \\frac{Q^2}{g}=\\frac{\\left(by_c+m{y_c}^2\\right)^3}{b+2my_c} \\tag{5.19} \\end{equation}\\] where \\(y_c\\) indicates critical flow depth. This is important for understanding what may happen to the water surface when flow encounters an obstacle or transition. For the channel of Example 5.3, the diagram is shown in Figure 5.4. hydraulics::spec_energy_trap( Q = 360, b = 20, m = 1, scale = 4, units = "Eng" ) Figure 5.4: A specific energy diagram for the conditions of Example 5.3. 
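The critical depth plotted in the diagram can be checked directly from Equation (5.19) with uniroot, the same solver used in Example 5.1; the interval is simply a wide range of plausible depths.

```r
#critical depth for the Example 5.3 channel via Equation (5.19)
Q <- 360; b <- 20; m <- 1; g <- 32.2
ycfun <- function(yc) (b * yc + m * yc^2)^3 / (b + 2 * m * yc) - Q^2 / g
uniroot(ycfun, interval = c(0.1, 20))$root   #about 2.08 ft
```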
This provides an illustration that for y=3 ft the flow is subcritical (above the critical depth). Specific energy for the conditions of the prior example is \\[E=y+\\frac{V^2}{2g}=3.0+\\frac{5.22^2}{2*32.2}=3.42 ft\\] If the channel bottom had an abrupt rise of \\(E-E_c=3.42-3.03=0.39 ft\\) critical depth would occur over the hump. A rise of anything greater than that would cause damming to occur. Once flow over a hump is critical, downstream of the hump the flow will be in supercritical conditions, flowing at the alternate depth. The specific energy for a given depth y and alternate depth can be added to the plot by including an argument for depth, y, as in Figure 5.5. hydraulics::spec_energy_trap( Q = 360, b = 20, m = 1, scale = 4, y=3.0, units = "Eng" ) Figure 5.5: A specific energy diagram for the conditions of Example 5.3 with an additional y value added. 5.6 Flow in Rectangular Channels When working with rectangular channels the open channel equations simplify, because flow, \\(Q\\), can be expressed as flow per unit width, \\(q = Q/b\\), where \\(b\\) is the channel width. Since \\(Q/A=V\\) and \\(A=by\\), Equation (5.18) can be written as Equation (5.20): \\[\\begin{equation} E=y+\\frac{Q^2}{2gA^2}=y+\\frac{q^2}{2gy^2} \\tag{5.20} \\end{equation}\\] Equation (5.19) for critical depth, \\(y_c\\), also is simplified for rectangular channels to Equation (5.21): \\[\\begin{equation} y_c = \\left({\\frac{q^2}{g}}\\right)^{1/3} \\tag{5.21} \\end{equation}\\] Combining Equation (5.20) and Equation (5.21) shows that at critical conditions, the minimum specific energy is: \\[\\begin{equation} E_{min} = \\frac{3}{2} y_c \\tag{5.22} \\end{equation}\\] Example 5.5, based on an exercise from the open-channel flow text by Sturm (Sturm, 2021), demonstrates how to solve for the depth through a rectangular section when the bottom height changes. Example 5.5 A 0.5 m wide rectangular channel carries a flow of 2.2 m\\(^3\\)/s at a depth of 2 m (\\(y_1\\)=2m). 
If the channel bottom rises 0.25 m (\\(\\Delta z=0.25~ m\\)), and head loss, \\(h_L\\), over the transition is negligible, what is the depth, \\(y_2\\), after the rise in channel bottom? Figure 5.6: The rectangular channel of Example 5.5 with an increase in channel bottom height downstream. A specific energy diagram is very helpful for establishing upstream conditions and estimating \\(y_2\\). p1 <- hydraulics::spec_energy_trap( Q = 2.2, b = 0.5, m = 0, y = 2, scale = 2.5, units = "SI" ) p1 Figure 5.7: A specific energy diagram for the conditions of Example 5.5. The values of \\(y_c\\) and \\(E_{min}\\) shown in the plot can be verified using Equations (5.21) and (5.22). This should always be checked to characterize the incoming flow and what will happen as flow passes over a hump. Since \\(y_1\\) > \\(y_c\\) the upstream flow is subcritical, and flow can be expected to drop as it passes over the hump. Upstream and downstream specific energy are related by Equation (5.23): \\[\\begin{equation} E_1-E_2=\\Delta z + h_L \\tag{5.23} \\end{equation}\\] Since \\(h_L\\) is negligible in this example, the downstream specific energy, \\(E_2\\), is lower than the upstream \\(E_1\\) by an amount \\(\\Delta z\\), or \\[\\begin{equation} E_2 = E_1 - \\Delta z \\tag{5.24} \\end{equation}\\] For a 0.25 m rise, and using \\(q = Q/b = 2.2/0.5 = 4.4\\), combining Equation (5.24) and Equation (5.20): \\[E_2 = E_1 - 0.25 = 2 + \\frac{4.4^2}{2(9.81)(2^2)} - 0.25 = 2.247 - 0.25 = 1.997 ~m\\] From the specific energy diagram, for \\(E_2=1.997 ~ m\\) a depth of about \\(y_2 \\approx 1.6 ~ m\\) would be expected, and the flow would continue in subcritical conditions. The value of \\(y_2\\) can be calculated using Equation (5.20): \\[1.997 = y_2 + \\frac{4.4^2}{2(9.81)(y_2^2)}\\] which can be rearranged to \\[0.9667 - 1.997 y_2^2 + y_2^3= 0\\] Solving a polynomial in R is straightforward using the polyroot function and using Re to extract the real portion of the solution (after filtering for non-imaginary solutions). 
all_roots <- polyroot(c(0.9667, 0, -1.997, 1)) Re(all_roots)[abs(Im(all_roots)) < 1e-6] #> [1] 0.9703764 -0.6090519 1.6356755 The negative root is meaningless, the lower positive root is the supercritical depth for \\(E_2 = 1.997 ~ m\\), and the larger positive root is the subcritical depth. Thus the correct solution is \\(y_2 = 1.64 ~ m\\) when the channel bottom rises by 0.25 m. A vertical line or other annotation can be added to the specific energy diagram to indicate \\(E_2\\) using ggplot2 with a command like p1 + ggplot2::geom_vline(xintercept = 1.997, linetype=3). The hydraulics R package can also add lines to a specific energy diagram for up to two depths: p2 <- hydraulics::spec_energy_trap(Q = 2.2, b = 0.5, m = 0, y = c(2, 1.64), scale = 2.5, units = "SI") p2 Figure 5.8: A specific energy diagram for the conditions of Example 5.5 with added annotation for when the bottom elevation rises. The specific energy diagram shows that if \\(\\Delta z > E_1 - E_{min}\\), the downstream specific energy, \\(E_2\\), would be to the left of the curve, so no feasible solution would exist. At that point damming would occur, raising the upstream depth, \\(y_1\\), and thus increasing \\(E_1\\) until \\(E_2 = E_{min}\\). The largest rise in channel bottom height that will not cause damming is called the critical hump height: \\(\\Delta z_{c} = E_1 - E_{min}\\). 5.7 Gradually varied steady flow When water approaches an obstacle, it can back up, with its depth increasing. The effect can be observed well upstream. Similarly, as water approaches a drop, such as with a waterfall, the water level declines, and that effect can also be seen upstream. In general, any change in slope or roughness will produce changes in depth along a channel length. 
There are three depths that are important to define for a channel: \\(y_c\\), critical depth, found using Equation (5.4) \\(y_n\\), normal depth, found using Equation (5.2) \\(y\\), flow depth, found using Equation (5.5) If \\(y_n < y_c\\) flow is supercritical (for example, flowing down a steep slope); if \\(y_n > y_c\\) flow is subcritical. Variations in the water surface are classified by profile types based on whether the normal flow is subcritical (mild slope, M) or supercritical (steep slope, S), as in Figure 5.9 (Davidian, Jacob, 1984). Figure 5.9: Types of flow profiles on mild and steep slopes In addition to channel transitions, changes in channel slope or roughness (Manning n) will cause the flow surface to vary. Some of these conditions are illustrated in Figure 5.10 (Davidian, Jacob, 1984). Figure 5.10: Types of flow profiles with changes in slope or roughness Typically, for supercritical flow the calculations start at an upstream cross section and move downstream. For subcritical flow calculations proceed upstream. However, for the direct step method, a negative result will indicate upstream, and a positive result indicates downstream. If the water surface passes through critical depth (from supercritical to subcritical or the reverse) it is no longer gradually varied flow and the methods in this section do not apply. This can happen at abrupt changes in channel slope or roughness, or channel transitions. 5.7.1 The direct step method The direct step method looks at two cross sections in a channel where depths, \\(y_1\\) and \\(y_2\\), are defined. Figure 5.11: A gradually varied flow example. The distance between these two cross-sections, \\({\\Delta}X\\), is calculated using Equation (5.25) \\[\\begin{equation} {\\Delta}X=\\frac{E_1-E_2}{\\overline{S}-S_0} \\tag{5.25} \\end{equation}\\] Where E is the specific energy from Equation (5.18), \\(S_0\\) is the slope of the channel bed, and \\(S\\) is the slope of the energy grade line. 
\\(\\overline{S}\\) is the average of the S values at each cross section calculated using the Manning equation, Equation (5.2) solved for slope, as in Equation (5.26). \\[\\begin{equation} S=\\frac{n^2\\,V^2}{C^2\\,R^{\\frac{4}{3}}} \\tag{5.26} \\end{equation}\\] Example 5.6 demonstrates this. Example 5.6 Water flows at 10 m3/s in a trapezoidal channel with n=0.015, bottom width 3 m, side slope of 2:1 (H:V) and longitudinal slope 0.0009 (0.09%). At the location of a USGS stream gage the flow depth is 1.4 m. Use the direct step method to find the distance to the point where the depth is 1.2 m and determine whether it is upstream or downstream. Begin by setting up a function to calculate the Manning slope and setting up the input data. #function to calculate Manning slope slope_f <- function(V,n,R,C) { return(V^2*n^2/(C^2*R^(4./3.))) } #Now set up input data ################################## #input Flow Q=10.0 #input depths: y1 <- 1.4 #starting depth y2 <- 1.2 #final depth #Define the number of steps into which the difference in y will be broken nsteps <- 2 #channel geometry: bottom_width <- 3 side_slope <- 2 #side slope is H:V. Use zero for rectangular manning_n <- 0.015 long_slope <- 0.0009 units <- "SI" #"SI" or "Eng" if (units == "SI") { C <- 1 #Manning constant: 1 for SI, 1.49 for US units g <- 9.81 } else { #"Eng" means English, or US system C <- 1.49 g <- 32.2 } #find depth increment for each step, depths at which to solve depth_incr <- (y2 - y1) / nsteps depths <- seq(from=y1, to=y2, by=depth_incr) First check to see if the flow is subcritical or supercritical and find the normal depth. Critical and normal depths can be calculated using the manningt function in the hydraulics package, as in Example 5.2. However, because other functionality of the rivr package is used, these will be calculated using functions from the rivr package. 
rivr::critical_depth(Q = Q, yopt = y1, g = g, B = bottom_width , SS = side_slope) #> [1] 0.8555011 #note using either depth for yopt produces the same answer rivr::normal_depth(So = long_slope, n = manning_n, Q = Q, yopt = y1, Cm = C, B = bottom_width , SS = side_slope) #> [1] 1.147137 The normal depth is greater than the critical depth, so the channel has a mild slope. The beginning and ending depths are above normal depth. This indicates the profile type, following Figure 5.9, is M-1, so the flow depth should decrease in depth going upstream. This also verifies that the flow depth between these two points does not pass through critical flow, so is a valid gradually varied flow problem. For each increment the \\({\\Delta}X\\) value needs to be calculated, and they need to be accumulated to find the total length, L, between the two defined depths. #loop through each channel segment (step), calculating the length for each segment. #The channel_geom function from the rivr package is helpful L <- 0 for ( i in 1:nsteps ) { #find hydraulic geometry, E and Sf at first depth xc1 <- rivr::channel_geom(y=depths[i], B=bottom_width, SS=side_slope) V1 <- Q/xc1[['A']] R1 <- xc1[['R']] E1 <- depths[i] + V1^2/(2*g) Sf1 <- slope_f(V1,manning_n,R1,C) #find hydraulic geometry, E and Sf at second depth xc2 <- rivr::channel_geom(y=depths[i+1], B=bottom_width, SS=side_slope) V2 <- Q/xc2[['A']] R2 <- xc2[['R']] E2 <- depths[i+1] + V2^2/(2*g) Sf2 <- slope_f(V2,manning_n,R2,C) Sf_avg <- (Sf1 + Sf2) / 2.0 dX <- (E1 - E2) / (Sf_avg - long_slope) L <- L + dX } cat(sprintf("Using %d steps, total distance from depth %.2f to %.2f = %.2f m\\n", nsteps, y1, y2, L)) #> Using 2 steps, total distance from depth 1.40 to 1.20 = -491.75 m The result is negative, verifying that the location of depth y2 is upstream of y1. 
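To see how the answer changes with the number of steps, the loop above can be wrapped in a small self-contained function; this is a base-R sketch that recomputes the trapezoidal geometry with Equations (5.6) through (5.8) rather than calling rivr.

```r
dist_direct <- function(nsteps, Q = 10, y1 = 1.4, y2 = 1.2, b = 3, m = 2,
                        n = 0.015, So = 0.0009, C = 1, g = 9.81) {
  E_Sf <- function(y) {   #specific energy and Manning slope at depth y
    A <- (b + m * y) * y
    R <- A / (b + 2 * y * sqrt(1 + m^2))
    V <- Q / A
    c(E = y + V^2 / (2 * g), Sf = V^2 * n^2 / (C^2 * R^(4/3)))
  }
  ys <- seq(y1, y2, length.out = nsteps + 1)
  L <- 0
  for (i in 1:nsteps) {   #Equation (5.25) summed over the increments
    p1 <- E_Sf(ys[i]); p2 <- E_Sf(ys[i + 1])
    L <- L + (p1[["E"]] - p2[["E"]]) / ((p1[["Sf"]] + p2[["Sf"]]) / 2 - So)
  }
  L
}
sapply(c(1, 2, 5, 20), dist_direct)   #the 2-step value matches the -491.75 m above
```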
Of course, the result will become more precise as more incremental steps are included, as shown in Figure 5.12 Figure 5.12: Variation of number of calculation steps to final calculated distance. The direct step method is also implemented in the hydraulics package, and can be applied to the same problem as above, as illustrated in Example 5.7. Example 5.7 Water flows at 10 m3/s in a trapezoidal channel with n=0.015, bottom width 3 m, side slope of 2:1 (H:V) and longitudinal slope 0.0009 (0.09%). At the location of a USGS stream gage the flow depth is 1.4 m. Use the direct step method to find the distance to the point where the depth is 1.2 m and determine whether it is upstream or downstream. hydraulics::direct_step(So=0.0009, n=0.015, Q=10, y1=1.4, y2=1.2, b=3, m=2, nsteps=2, units="SI") #> y1=1.400, y2=1.200, yn=1.147, yc=0.855585 #> Profile type = M1 #> # A tibble: 3 × 7 #> x z y A Sf E Fr #> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> #> 1 0 0 1.4 8.12 0.000407 1.48 0.405 #> 2 -192. 0.173 1.3 7.28 0.000548 1.40 0.466 #> 3 -492. 0.443 1.2 6.48 0.000753 1.32 0.541 This produces the same result, and verifies that the water surface profile is type M-1. 5.7.2 Standard step method The standard step method works similarly to the direct step method, except from one known depth the second depth is determined at a known distance, L. This is a preferred method when the depth at a critical location, such as a bridge, is needed. The rivr package implements the standard step method in its compute_profile function. To compare it to the direct step method, check the depth at \\(y_2\\) given the total distance from Example 5.6. Example 5.8 For the same channel and flow rate as Example 5.6, determine the depth of water at the distance L determined above. The function requires the distance to be positive, so apply the absolute value to the L value. 
dist = abs(L) ans <- rivr::compute_profile(So = long_slope, n = manning_n, Q = Q, y0 = y1, Cm = C, g = g, B = bottom_width, SS = side_slope, stepdist = dist/nsteps, totaldist = dist) #Distances along the channel where depths were determined ans$x #> [1] 0.0000 -245.8742 -491.7483 #Depths at each distance ans$y #> [1] 1.400000 1.277009 1.200592 This shows the distances and depths at each of the steps defined. Consistent with the above, the distances are negative, showing that they are progressing upstream. The result for \\(y_2\\) is identical to that from the direct step method. 5.8 Rapidly varied flow (the hydraulic jump) Figure 5.13: A hydraulic jump at St. Anthony Falls, Minnesota. In the discussion of critical flow in Section 5.5, the concept of alternate depths was introduced, where a given flow rate in a channel with known geometry typically may assume two possible values, one subcritical and one supercritical. For the case of supercritical flow transitioning to subcritical flow, a smooth transition is impossible, so a hydraulic jump occurs. A hydraulic jump always dissipates some of the incoming energy. A hydraulic jump is depicted in Figure 5.14 (Peterka, Alvin J., 1978). Figure 5.14: A typical hydraulic jump. 5.8.1 Sequent (or conjugate) depths The two depths on either side of a hydraulic jump are called sequent depths or conjugate depths. The relationship between them can be established using the momentum equation to develop a general expression (for any open channel) for the momentum function, M, as in Equation (5.27). \\[\\begin{equation} M=Ah_c+\\frac{Q^2}{gA} \\tag{5.27} \\end{equation}\\] where \\(h_c\\) is the distance from the water surface to the centroid of the channel cross-section. For a trapezoidal channel, the momentum equation becomes that described by Equation (5.28). 
\\[\\begin{equation} M=\\frac{by^2}{2}+\\frac{my^3}{3}+\\frac{Q^2}{gy\\left(b+my\\right)} \\tag{5.28} \\end{equation}\\] For the case of a rectangular channel, setting m=0 and setting the momentum function for two sequent depths, y1 and y2, equal, produces the relationship in Equation (5.29). \\[\\begin{equation} \\frac{y_2}{y_1}=\\frac{1}{2}\\left(-1+\\sqrt{1+8Fr_1^2}\\right) ~\\mathrm{or}~ \\frac{y_1}{y_2}=\\frac{1}{2}\\left(-1+\\sqrt{1+8Fr_2^2}\\right) \\tag{5.29} \\end{equation}\\] where \\(Fr_n\\) is the Froude Number [Equation (5.1)] at section n. Again, for the case of a rectangular channel, the energy head loss through a hydraulic jump simplifies to Equation (5.30). \\[\\begin{equation} h_l=\\frac{\\left(y_2-y_1\\right)^3}{4y_1y_2} \\tag{5.30} \\end{equation}\\] Given that the momentum function must be conserved on either side of a hydraulic jump, finding the sequent depth for any known depth becomes straightforward for trapezoidal shapes. Setting M1 = M2 in Equation (5.28) allows the use of a solver, as in Example 5.9. Example 5.9 A trapezoidal channel with a bottom width of 0.5 m and a side slope of 1:1 carries a flow of 0.2 m3/s. The depth on one side of a hydraulic jump is 0.1 m. Find the sequent depth, the energy head loss, and the power dissipation in Watts. flow <- 0.2 ans <- hydraulics::sequent_depth(Q=flow,b=0.5,y=0.1,m=1,units = "SI", ret_units = TRUE) #print output of function as.data.frame(ans) #> ans #> y 0.1 [m] #> y_seq 0.3941009 [m] #> yc 0.217704 [m] #> Fr 3.635731 [1] #> Fr_seq 0.3465538 [1] #> E 0.666509 [m] #> E_seq 0.4105265 [m] #Find energy head loss hl <- abs(ans$E - ans$E_seq) hl #> 0.2559825 [m] #Express this as a power loss gamma <- hydraulics::specwt(units = "SI") P <- gamma*flow*hl cat(sprintf("Power loss = %.1f Watts\\n",P)) #> Power loss = 501.4 Watts The energy loss across hydraulic jumps varies with the Froude number of the incoming flow, as depicted in Figure 5.15 (Peterka, Alvin J., 1978). Figure 5.15: Types of hydraulic jumps. 
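For the rectangular case, Equations (5.29) and (5.30) need no solver at all; the unit discharge and upstream depth below are hypothetical values chosen only to produce a supercritical approach flow.

```r
q <- 2; y1 <- 0.3; g <- 9.81          #hypothetical: q in m^2/s, y1 in m
Fr1 <- (q / y1) / sqrt(g * y1)        #Equation (5.1) with V = q/y1, D = y1
y2 <- (y1 / 2) * (-1 + sqrt(1 + 8 * Fr1^2))   #sequent depth, Equation (5.29)
hl <- (y2 - y1)^3 / (4 * y1 * y2)             #head loss, Equation (5.30)
c(Fr1 = Fr1, y2 = y2, hl = hl)        #about Fr1 = 3.9, y2 = 1.51 m, hl = 0.97 m
```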
5.8.2 Location of a hydraulic jump In hydraulic infrastructure where hydraulic jumps will occur there are usually engineered features, such as baffles or basins, to force a hydraulic jump to occur in specific locations, to protect downstream waterways from the turbulent effects of an uncontrolled hydraulic jump. In the absence of engineered features to cause a jump, the location of a hydraulic jump can be determined using the concepts of Sections 5.7 and 5.8. Example 5.10 demonstrates the determination of the location of a hydraulic jump when normal flow conditions exist at some distance upstream and downstream of the jump. Example 5.10 A rectangular (a trapezoid with side slope, m=0) concrete channel with a bottom width of 3 m carries a flow of 8 m3/s. The upstream channel slopes steeply at So=0.018 and discharges onto a mild slope of So=0.0015. Determine the height of the jump and its location. First find the normal depth on each slope, and the critical depth for the channel. yn1 <- hydraulics::manningt(Q = 8, n = 0.013, m = 0, Sf = 0.018, b = 3, units = "SI")$y yn2 <- hydraulics::manningt(Q = 8, n = 0.013, m = 0, Sf = 0.0015, b = 3, units = "SI")$y yc <- hydraulics::manningt(Q = 8, n = 0.013, m = 0, Sf = 0.0015, b = 3, units = "SI")$yc cat(sprintf("yn1 = %.3f m, yn2 = %.3f m, yc = %.3f m\\n", yn1, yn2, yc)) #> yn1 = 0.498 m, yn2 = 1.180 m, yc = 0.898 m Recall that the calculation of yc only depends on flow and channel geometry (Q, b, m), so the values of n and Sf can be arbitrary for that command. These results confirm that flow is supercritical upstream and subcritical downstream, so a hydraulic jump will occur. The hydraulic jump will either begin at yn1 (and jump to the sequent depth for yn1) or end at yn2 (beginning at the sequent depth for yn2). The possibilities are shown in Figure 5.9 in the lower right panel. First check the two sequent depths. 
yn1_seq <- hydraulics::sequent_depth(Q = 8, b = 3, y=yn1, m = 0, units = "SI")$y_seq yn2_seq <- hydraulics::sequent_depth(Q = 8, b = 3, y=yn2, m = 0, units = "SI")$y_seq cat(sprintf("yn1_seq = %.3f m, yn2_seq = %.3f m\\n", yn1_seq, yn2_seq)) #> yn1_seq = 1.476 m, yn2_seq = 0.666 m This confirms that if the jump began at yn1 (on the steep slope) it would need to jump to a level below yn2, with an S-1 curve providing the gradual increase in depth to yn2. Since yn1_seq exceeds yn2, this is not possible. That can be verified using the direct_step function to show the distance from yn1_seq to yn2 would need to be upstream (negative x values in the result), which cannot occur for this case. This means the alternate case must exist, with an M-3 profile raising yn1 to yn2_seq at which point the jump occurs. The direct step method can find this distance along the channel. hydraulics::direct_step(So=0.0015, n=0.013, Q=8, y1=yn1, y2=yn2_seq, b=3, m=0, nsteps=2, units="SI") #> # A tibble: 3 × 7 #> x z y A Sf E Fr #> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> #> 1 0 0 0.498 1.49 0.0180 1.96 2.42 #> 2 23.4 -0.0350 0.582 1.75 0.0113 1.65 1.92 #> 3 44.6 -0.0669 0.666 2.00 0.00761 1.48 1.57 The number of calculation steps (nsteps) can be increased for greater precision, but 2 steps is adequate here. Chapter 6 Momentum in water flow When moving water changes direction or velocity, an external force must be associated with the change. In civil engineering infrastructure this is ubiquitous and the forces associated with this must be accounted for in design. Figure 6.1: Water pipe on Capitol Hill, Seattle. 
6.1 Equations of linear momentum Newton’s law relates the forces applied to a body to the rate of change of linear momentum, as in Equation (6.1) \\[\\begin{equation} \\sum{\\overrightarrow{F}}=\\frac{d\\left(m\\overrightarrow{V}\\right)}{dt} \\tag{6.1} \\end{equation}\\] For fluid flow in a hydraulic system carrying a flow Q, the equation can be written in any linear direction (x-direction in this example) as in Equation (6.2). \\[\\begin{equation} \\sum{F_x}=\\rho{Q}\\left(V_{2x}-V_{1x}\\right) \\tag{6.2} \\end{equation}\\] where \\(\\rho{Q}\\) is the mass flux through the system, \\(V_{1x}\\) is the velocity in the x-direction where flow enters the system, and \\(V_{2x}\\) is the velocity in the x-direction where flow leaves the system. \\(\\sum{F_x}\\) is the vector sum of all external forces acting on the system in the x-direction. It should be noted that the values of V are average cross-sectional velocities. A momentum correction factor (\\(\\beta\\)) can be applied when the velocity is highly non-uniform across the cross-section. In nearly all civil engineering applications the correction factor is close enough to 1 that it is ignored in the calculations. 6.2 The momentum equation in pipe design One of the most common civil engineering applications of the momentum equation is providing the lateral restraint where a pipe bend occurs. One approach to provide the external force to keep the pipe in equilibrium is to use a thrust block, as illustrated in Figure 6.2 (Ductile Iron Pipe Research Association, 2016). Figure 6.2: A sketch of a pipe bend with a thrust block. Example 6.1 A horizontal 18-inch diameter pipe carries flow Q of water at 68\\(^\\circ\\)F with a pressure of 60 psi and encounters a bend of angle \\(\\theta=30^\\circ\\). Show how the reaction force, R, varies with the flow rate through the bend for flows up to 20 ft3/s. Ignore head loss through the bend.
Taking the control volume to be the bend, the external forces acting on the bend are shown in Figure 6.3. Figure 6.3: External forces on the pipe. Note that if the pipe were not horizontal, the weight of the water in the pipe would also need to be included. Including all of the external forces in the x-direction on the left side of Equation (6.2), and recognizing that V1x=V1 and V2x=V2cos\\(\\theta\\), produces: \\[P_1A_1-P_2A_2cos\\theta-R_x=\\rho{Q}\\left(V_{2}cos\\theta-V_{1}\\right)\\] Rearranging to solve for Rx gives Equation (6.3). \\[\\begin{equation} R_x=P_1A_1-P_2A_2cos\\theta-\\rho{Q}\\left(V_{2}cos\\theta-V_{1}\\right) \\tag{6.3} \\end{equation}\\] Similarly, in the y-direction Equation (6.4) can be assembled, noting that V1y=0 and V2y=\\(-\\)V2sin\\(\\theta\\). \\[\\begin{equation} R_y=P_2A_2sin\\theta-\\rho{Q}\\left(-V_{2}sin\\theta\\right) \\tag{6.4} \\end{equation}\\] This can be set up in R in many ways, such as the following. #Input Data -- ensure units are consistent in ft, lbf (pound force), sec D1 <- units::set_units(18/12, ft) D2 <- units::set_units(18/12, ft) P1 <- units::set_units(60*144, lbf/ft^2) #convert psi to lbf/ft^2 P2 <- units::set_units(60*144, lbf/ft^2) theta <- 30*(pi/180) #convert to radians for sin, cos functions rho <- hydraulics::dens(T=68, units="Eng", ret_units = TRUE) # calculations - vary flow from 0 to 20 ft^3/s Q <- units::set_units(seq(0,20,1), ft^3/s) A1 <- pi/4*D1^2 A2 <- pi/4*D2^2 V1 <- Q/A1 V2 <- Q/A2 Rx <- P1*A1-P2*A2*cos(theta)-rho*Q*(V2*cos(theta)-V1) Ry <- P2*A2*sin(theta)-rho*Q*(-V2*sin(theta)) R <- sqrt(Rx^2 + Ry^2) plot(Q,R) When Q=0, only the pressure terms contribute to R. This plot shows that for typical water main conditions the change in direction of the velocity vectors adds a small amount (less than 3% in this example) to the calculated R value. This is why design guidelines for water mains often neglect the velocity term in Equation (6.2).
In other industrial or laboratory conditions it may not be valid to neglect that term. "],["pumps-and-how-they-operate-in-a-hydraulic-system.html", "Chapter 7 Pumps and how they operate in a hydraulic system 7.1 Defining the system curve 7.2 Defining the pump characteristic curve 7.3 Finding the operating point", " Chapter 7 Pumps and how they operate in a hydraulic system For any system delivering water through circular pipes with the assistance of a pump, the selection of the pump requires a consideration of both the pump characteristics and the energy required to deliver different flow rates through the system. These are described by the system and pump characteristic curves. Where they intersect defines the operating point, the flow and (energy) head at which the pump would operate in that system. 7.1 Defining the system curve Figure 7.1: A simple hydraulic system (from https://www.castlepumps.com) For a simple system the loss of head (energy per unit weight) due to friction, \\(h_f\\), is described by the Darcy-Weisbach equation, which can be simplified as in Equation (7.1). \\[\\begin{equation} h_f = \\frac{fL}{D}\\frac{V^2}{2g} = \\frac{8fL}{\\pi^{2}gD^{5}}Q^{2} = KQ{^2} \\tag{7.1} \\end{equation}\\] The total dynamic head that the system requires a pump to provide, \\(h_p\\), is found by solving the energy equation from the upstream reservoir (point 1) to the downstream reservoir (point 2), as in Equation (7.2). \\[\\begin{equation} h_p = \\left(z+\\frac{P}{\\gamma}+\\frac{V^2}{2g}\\right)_2 - \\left(z+\\frac{P}{\\gamma}+\\frac{V^2}{2g}\\right)_1+h_f \\tag{7.2} \\end{equation}\\] For the simple system in Figure 7.1, the velocity can be considered negligible in both reservoirs 1 and 2, and the pressures at both reservoirs are atmospheric, so Equation (7.2) can be simplified to Equation (7.3).
\\[\\begin{equation} h_p = \\left(z_2 - z_1\\right) + h_f=h_s+h_f=h_s+KQ^2 \\tag{7.3} \\end{equation}\\] Using the hydraulics package, the coefficient, K, can be calculated manually or using other package functions for friction loss in a pipe system with \\(Q=1\\). Using this to develop a system curve is demonstrated in Example 7.1. Example 7.1 Develop a system curve for a pipe with a diameter of 20 inches, length of 3884 ft, and absolute roughness of 0.0005 ft. Use kinematic viscosity, \\(\\nu\\) = 1.23 x 10-5 ft2/s. Assume a static head, z2 - z1 = 30 ft. ans <- hydraulics::darcyweisbach(Q = 1,D = 20/12, L = 3884, ks = 0.0005, nu = 1.23e-5, units = "Eng") cat(sprintf("Coefficient K: %.3f\\n", ans$hf)) #> Coefficient K: 0.160 scurve <- hydraulics::systemcurve(hs = 30, K = ans$hf, units = "Eng") print(scurve$eqn) #> [1] "h == 30 + 0.16*Q^2" For this function of the hydraulics package, Q is either in ft\\(^3\\)/s or m\\(^3\\)/s, depending on whether Eng or SI is specified for units. Often data for flows in pumping systems are in other units such as gpm or liters/s, so unit conversions would need to be applied. 7.2 Defining the pump characteristic curve The pump characteristic curve is based on data or graphs obtained from a pump manufacturer, such as that depicted in Figure 7.2. Figure 7.2: A sample set of pump curves (from https://www.gouldspumps.com). The three red dots are points selected to approximate the curve The three points, selected manually across the range of the curve, are used to generate a polynomial fit to the curve. There are many forms of equations that could be used to fit these three points to a smooth, continuous curve. Three common ones are implemented in the hydraulics package, shown in Table 7.1. Table 7.1: Common equation forms for pump characteristic curves. type Equation poly1 \\(h=a+{b}{Q}+{c}{Q}^2\\) poly2 \\(h=a+{c}{Q}^2\\) poly3 \\(h=h_{shutoff}+{c}{Q}^2\\) The \\(h_{shutoff}\\) value is the pump head at \\(Q={0}\\).
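The poly1 form in Table 7.1 passes a parabola exactly through three points, which amounts to a 3x3 linear solve. A minimal base-R sketch, using illustrative points rather than values read from Figure 7.2:

```r
# Fit h = a + b*Q + c*Q^2 exactly through three (Q, h) points
# (illustrative points, not read from a manufacturer curve)
Q <- c(0, 10, 16)     # flow, ft^3/s
h <- c(85, 60, 25)    # head, ft
A <- cbind(1, Q, Q^2) # design matrix for a + b*Q + c*Q^2
coefs <- solve(A, h)  # exact solution: three equations, three unknowns
names(coefs) <- c("a", "b", "c")
coefs
```

With more than three points, `lm(h ~ Q + I(Q^2))` gives the least-squares equivalent.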
Many methods can be used to fit a polynomial to a set of points. The hydraulics package includes the pumpcurve function for this purpose. The coordinates of the points can be input as numeric vectors, being careful to use correct units, consistent with those used for the system curve. Manufacturers’ pump curves often use units for flow that are not what the hydraulics package needs, and the units package provides a convenient way to convert them as needed. Developing the pump characteristic curve using the hydraulics package is demonstrated in Example 7.2. Example 7.2 Develop a pump characteristic curve for the pump in Figure 7.2, using the three points marked in red. Use the poly2 form from Table 7.1. qgpm <- units::set_units(c(0, 5000, 7850), gallons/minute) #Convert units to those needed for package, and consistent with system curve qcfs <- units::set_units(qgpm, ft^3/s) #Head units, read from the plot, are already in ft so setting units is not needed hft <- c(81, 60, 20) pcurve <- hydraulics::pumpcurve(Q = qcfs, h = hft, eq = "poly2", units = "Eng") print(pcurve$eqn) #> [1] "h == 82.5 - 0.201*Q^2" The function pumpcurve returns a pumpcurve object that includes the polynomial fit equation and a simple plot to check the fit. This can be plotted as in Figure 7.3. pcurve$p Figure 7.3: A pump characteristic curve 7.3 Finding the operating point The two curves can be combined to find the operating point of the selected pump in the defined system. This can be done by plotting them manually, solving the equations simultaneously, or by using software. The hydraulics package finds the operating point using the system and pump curves defined earlier. Example 7.3 shows how this is done. Example 7.3 Find the operating point for the pump and system curves developed in Examples 7.1 and 7.2.
oppt <- hydraulics::operpoint(pcurve = pcurve, scurve = scurve) cat(sprintf("Operating Point: Q = %.3f, h = %.3f\\n", oppt$Qop, oppt$hop)) #> Operating Point: Q = 12.051, h = 53.285 The operpoint function returns an operpoint object that includes a plot of both curves. This can be plotted as in Figure 7.4. oppt$p Figure 7.4: The pump operating point "],["the-hydrologic-cycle-and-precipitation.html", "Chapter 8 The hydrologic cycle and precipitation 8.1 Precipitation observations 8.2 Precipitation frequency 8.3 Precipitation gauge consistency – double mass curves 8.4 Precipitation interpolation and areal averaging", " Chapter 8 The hydrologic cycle and precipitation All of the earlier chapters of this book dealt with the behavior of water in different hydraulic systems, such as canals or pipes. Now we consider the bigger picture of where the water originates, and ultimately how we can estimate how much water is available for different uses, and how much excess (flood) water systems will need to be designed and built to accommodate. A fundamental concept is the hydrologic cycle, depicted in Figure 8.1. Figure 8.1: The hydrologic cycle, from the USGS The primary variable in the hydrologic cycle from an engineering perspective is precipitation, since that is the source of the water used and managed in engineered systems. The hydromisc package will need to be installed to access some of the data used below. If it is not installed, do so following the instructions on the github site for the package. 8.1 Precipitation observations Direct measurement of precipitation is done with precipitation gauges, such as shown in Figure 8.2. Figure 8.2: National Weather Service standard 8-inch gauge (source: NWS). Precipitation can vary dramatically over short distances, so point measurements are challenging to work with when characterizing rainfall over a larger area. An image from an atmospheric river event over California is shown in Figure 8.3.
Reflectivity values are converted to precipitation rates based on calibration with rain gauge observations. Figure 8.3: A raw radar image showing reflectivity values. Red squares indicate weather radar locations (source: NOAA). There are additional data sets that merge many sources of data to create continuous (spatially and temporally) datasets of precipitation. While these provide excellent resources for large scale studies, we will initially focus on point observations. Obtaining precipitation data can be done in many ways. Example 8.1 demonstrates one method using R. Example 8.1 Characterize the rainfall in the city of San Jose, in Santa Clara County. For the U.S., a good starting point is to use the mapping tools at the NOAA Climate Data Online (CDO) website. From the mapping tools page, select Observations: Daily, ensure GHCN Daily is checked (so the search returns stations that are part of the Global Historical Climatology Network), and search for San Jose, CA. Figure 8.4 shows the three stations that lie within the rectangle sketched on the map, and the one that was selected. Figure 8.4: Selection results for a portion of San Jose, CA (source: CDO). The data can be downloaded directly from the CDO site as a csv file, a sample of which is included with the hydromisc package (the sample also includes air temperature data). Note the units that you specify for the data since they will not appear in the csv file. Note that this initial station search and data download can be automated in R using other packages: Using the FedData package, following a method similar to this. Using the rnoaa package, referring to the vignettes. While formats will vary depending on the source of the data, in this example we can import the csv file directly. Since units were left as ‘standard’ on the CDO website, precipitation is in inches and temperatures in °F.
datafile <- system.file("extdata", "cdo_data_ghcn_23293.csv", package="hydromisc") ghcn_data <- read.csv(datafile,header=TRUE) A little cleanup of the data needs to be done to ensure the DATE column is in date format, and to change any missing values (often denoted as 9999 or -9999) to NA. With missing values flagged as NA, R can ignore them, set them to zero, or fill them in with functions such as zoo::na.approx() or zoo::na.spline(), or with the more sophisticated imputeTS package. Finally, add a ‘water year’ column (a water year begins on October 1 and ends September 30). ghcn_data$DATE <- as.Date(ghcn_data$DATE, format="%Y-%m-%d") ghcn_data$PRCP[ghcn_data$PRCP <= -999 | ghcn_data$PRCP >= 999 ] = NA wateryr <- function(d) { if (as.numeric(format(d, "%m")) >= 10) { wy = as.numeric(format(d, "%Y")) + 1 } else { wy = as.numeric(format(d, "%Y")) } } ghcn_data$wy <- sapply(ghcn_data$DATE, wateryr) A convenient package for characterizing precipitation is hydroTSM, the output of which is shown in Figure 8.5. library(hydroTSM) #create a simple data frame for plotting ghcn_prcp <- data.frame(date = ghcn_data$DATE, prcp = ghcn_data$PRCP ) #convert it to a zoo object x <- zoo::read.zoo(ghcn_prcp) hydroTSM::hydroplot(x, var.type="Precipitation", main="", var.unit="inch", pfreq = "ma", from="1999-01-01", to="2022-12-31") Figure 8.5: Monthly and annual precipitation summary for San Jose, CA for 1999-2022 This presentation shows the seasonality of rainfall in San Jose, with most falling between October and May. The mean is about 12 inches per year, with most years experiencing between 10 and 15 inches of precipitation. There are functions to produce many statistics such as monthly means.
#calculate monthly sums monsums <- hydroTSM::daily2monthly(x, sum, na.rm = TRUE) monavg <- as.data.frame(hydroTSM::monthlyfunction(monsums, mean, na.rm = TRUE)) #if record begins in a month other than January, need to reorder monavg <- monavg[order(factor(row.names(monavg), levels = month.abb)),,drop=FALSE] colnames(monavg)[1] <- "Avg monthly precip, in" knitr::kable(monavg, digits = 2) |> kableExtra::kable_paper(bootstrap_options = "striped", full_width = F) Avg monthly precip, in Jan 2.23 Feb 2.26 Mar 1.75 Apr 1.03 May 0.26 Jun 0.10 Jul 0.00 Aug 0.00 Sep 0.10 Oct 0.60 Nov 1.21 Dec 2.31 The winter of 2016-2017 (water year 2017) was a record wet year for much of California. Figure 8.6 shows a hyetograph of the daily values for that year. library(ggplot2) ghcn_prcp2 <- data.frame(date = ghcn_data$DATE, wy = ghcn_data$wy, prcp = ghcn_data$PRCP ) ggplot(subset(ghcn_prcp2, wy==2017), aes(x=date, y=prcp)) + geom_bar(stat="identity",color="red") + labs(x="", y="precipitation, inch/day") + scale_x_date(date_breaks = "1 month", date_labels = "%b %d") Figure 8.6: Daily Precipitation for San Jose, CA for water year 2017 While many other statistics could be calculated to characterize precipitation, only a handful more will be shown here. One uses a convenient function of the seas package, as shown in Figure 8.7.
library(tidyverse) #The average precipitation rate for rainy days (with more than 0.01 inch) avgrainrate <- ghcn_prcp2[ghcn_prcp2$prcp > 0.01,] |> group_by(wy) |> summarise(prcp = mean(prcp)) #the number of rainy days per year nraindays <- ghcn_prcp2[ghcn_prcp2$prcp > 0.01,] |> group_by(wy) |> summarise(nraindays = length(prcp)) #Find length of consecutive dry and wet spells for the record days.dry.wet <- seas::interarrival(ghcn_prcp, var = "prcp", p.cut = 0.01, inv = FALSE) #add a water year column to the result days.dry.wet$wy <- sapply(days.dry.wet$date, wateryr) res <- days.dry.wet |> group_by(wy) |> summarise(cdd = median(dry, na.rm=TRUE), cwd = median(wet, na.rm=TRUE)) res_long <- pivot_longer(res, -wy, names_to="statistic", values_to="consecutive_days") ggplot(res_long, aes(x = wy, y = consecutive_days)) + geom_bar(aes(fill = statistic),stat = "identity", position = "dodge")+ xlab("") + ylab("Median consecutive days") Figure 8.7: Median consecutive dry days (cdd) and wet days (cwd) for each water year. 8.2 Precipitation frequency For engineering design, the uncertainty in predicting extreme rainfall, floods, or droughts is expressed as risk, typically the probability that a certain event will be equalled or exceeded in any year. The return period, T, is the inverse of the probability of exceedance, so that a storm with a 10% chance of being exceeded in any year (\\(p_{exceed}~=0.10\\)) is a \\(T=\\frac{1}{0.10}=10\\) year storm. A 10-year storm can be experienced in multiple consecutive years, so it only means that, on average over very long periods (in a stationary climate), one would expect to see one event every T years. In the U.S., precipitation frequency statistics are available at the NOAA Precipitation Frequency Data Server (PFDS). An example of the graphical data available there is shown in Figure 8.8. Figure 8.8: Intensity-duration-frequency (IDF) curves from the NOAA PFDS.
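The return period definition above also gives the risk of seeing at least one T-year event over a longer design horizon: the probability of at least one occurrence in n years is 1 - (1 - 1/T)^n. A quick sketch in R (the function name is just for illustration):

```r
# Probability of at least one T-year event occurring in an n-year period
risk_of_exceedance <- function(T, n) 1 - (1 - 1/T)^n

# chance a 10-year storm is equalled or exceeded at least once in 10 years
risk_of_exceedance(T = 10, n = 10)  # about 0.65
```

This is why a structure designed for the 10-year storm is more likely than not to see that storm within its first decade.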
The calculations performed to produce the IDF curves use decades of daily data, because many years are needed to estimate the frequency with which an event might occur. As a demonstration, however, a single year can be used to illustrate the relationship between intensity and duration, which for durations longer than about 2 hours (McCuen, 2016) can be expressed as in Equation (8.1). \\[\\begin{equation} i = aD^b \\tag{8.1} \\end{equation}\\] As a power curve, Equation (8.1) should be a straight line on a log-log plot. This is shown in Example 8.2. Example 8.2 Use the 2017 water year of rainfall data for the city of San Jose, to plot the relationship between intensity and duration for the 1, 3, 7, and 30-day events. Begin by calculating the necessary intensity and duration values. #First extract one water year of data df.one.year <- subset(ghcn_prcp, date>=as.Date("2016-10-01") & date<=as.Date("2017-09-30")) #Calculate the running mean value for the defined durations dur <- c(1,3,7,30) px <- numeric(length(dur)) for (i in 1:4) { px[i] <- max(zoo::rollmean(df.one.year$prcp,dur[i])) } #create the intensity-duration data frame df.id <- data.frame(duration=dur,intensity=px) Fit the theoretical curve (Equation (8.1)) using the nonlinear least squares function of the stats package (included with a base R installation), and plot the results. 
#fit a power curve to the data fit <- stats::nls(intensity ~ a*duration^b, data=df.id, start=list(a=1,b=-0.5)) print(signif(coef(fit),3)) #> a b #> 1.850 -0.751 #find estimated y-values using the fit df.id$intensity_est <- predict(fit, newdata = df.id) #duration-intensity plot with base graphics plot(x=df.id$duration,y=df.id$intensity,log='xy', pch=1, xaxt="n", xlab="Duration, day" , ylab="Intensity, inches/day") lines(x=df.id$duration,y=df.id$intensity_est,lty=2) abline( h = c(seq( 0.1,1,0.1),2.0), lty = 3, col = "lightgray") abline( v = c(1,2,3,4,5,7,10,15,20,30), lty = 3, col = "lightgray") axis(side = 1, at =c(1,2,3,4,5,7,10,15,20,30) ,labels = T) axis(side = 2, at =c(seq( 0.1,1,0.1),2.0) ,labels = T) Figure 8.9: Intensity-duration relationship for water year 2017. Calculated values are based on daily data; theoretical is the power curve fit. If this were done for many years, the results for any one duration could be combined (one value per year) and sorted in decreasing order. That means the rank assigned to the highest value would be 1, and the lowest value would be the number of years, n. The return period, T, for any event would then be found using Equation (8.2), based on the Weibull plotting position formula. \\[\\begin{equation} T=\\frac{n+1}{rank} \\tag{8.2} \\end{equation}\\] That would allow the creation of IDF curves for a point. 8.3 Precipitation gauge consistency – double mass curves The method of using double mass curves to identify changes in an observation method (such as new instrumentation or a change of location) can be applied to precipitation gauges or any other type of measurement. This method is demonstrated with an example from the U.S. Geological Survey (Searcy & Hardison, 1960). The first step is to compile data for a gauge (or better, a set of gauges) that are known to be unperturbed (Station A in the sample data set), and for a suspect gauge thought to have experienced a change (Station X in this case).
annual_data <- hydromisc::precip_double_mass knitr::kable(annual_data, digits = 2) |> kableExtra::kable_paper(bootstrap_options = "striped", full_width = F) Year Station_A Station_X 1926 39.75 32.85 1927 29.57 28.08 1928 42.01 33.51 1929 41.39 29.58 1930 31.55 23.76 1931 55.54 58.39 1932 48.11 46.24 1933 39.85 30.34 1934 45.40 46.78 1935 44.89 38.06 1936 32.64 42.82 1937 45.87 37.93 1938 46.05 50.67 1939 49.76 46.85 1940 47.26 50.52 1941 37.07 34.38 1942 45.89 47.60 Accumulate the (annual) precipitation (measured in inches) and plot the values for the suspect station against the reference station(s), as in Figure 8.10. annual_sum <- data.frame(year = annual_data$Year, sum_A = cumsum(annual_data$Station_A), sum_X = cumsum(annual_data$Station_X)) #create scatterplot with a label on every point library(ggplot2) library(ggrepel) ggplot(annual_sum, aes(sum_X,sum_A, label = year)) + geom_point() + geom_text_repel(size=3, direction = "y") + labs(x="Cumulative precipitation at Station X, in", y="Cumulative precipitation at Station A, in") + theme_bw() Figure 8.10: A double mass curve. The break in slope between 1930 and 1931 appears clear. This should be checked against records for the station to verify whether changes did occur at that time. If the data from Station X are to be used to fill other records or estimate long-term averages, the inconsistency needs to be corrected. One method to highlight the year at which the break occurs is to plot the residuals from a best fit line to the cumulative data from the two stations, as illustrated by the Food and Agriculture Organization (FAO) (Allen & United Nations, 1998). linfit = lm(sum_X ~ sum_A, data = annual_sum) plot(x=annual_sum$year,linfit$residuals, xlab = "Year",ylab = "Residual of regression") Figure 8.11: Residuals of the linear fit to the double-mass curve.
This verifies that after 1930 the steep decline ends, so it may represent a change in location or equipment. Adjusting the earlier record to be consistent with the later period is done by applying Equation (8.3). \\[\\begin{equation} y^{'}_i~=~\\frac{b_2}{b_1}y_i \\tag{8.3} \\end{equation}\\] where b2 and b1 are the slopes after and before the break in slope, respectively, yi is the original precipitation data, and y’i is the adjusted precipitation. This can be applied as follows. b1 <- lm(sum_X ~ sum_A, data = subset(annual_sum, year <= 1930))$coefficients[['sum_A']] b2 <- lm(sum_X ~ sum_A, data = subset(annual_sum, year > 1930))$coefficients[['sum_A']] #Adjust early values and concatenate to later values for Station X adjusted_X <- c(annual_data$Station_X[annual_data$Year <= 1930]*b2/b1, annual_data$Station_X[annual_data$Year > 1930]) annual_sum_adj <- data.frame(year = annual_data$Year, sum_A = cumsum(annual_data$Station_A), sum_X = cumsum(adjusted_X)) #Check that slope now appears more consistent ggplot(annual_sum_adj, aes(sum_X,sum_A, label = year)) + geom_point() + geom_text_repel(size=3, direction = "y") + labs(x="Cumulative adjusted precipitation at Station X, in", y="Cumulative precipitation at Station A, in") + theme_bw() Figure 8.12: A double mass curve using adjusted data at Station X. The plot shows a more consistent slope, as expected. Another plot of residuals could also validate the effect of the adjustment. 8.4 Precipitation interpolation and areal averaging It is rare that there are precipitation observations exactly where one needs data, which means existing observations must be interpolated to a point of interest. This is also how missing data in a record are filled in using surrounding observations. Interpolation also allows sparse observations, or observations from a variety of sources, to be combined into a spatially continuous grid.
This is an essential step to estimating the precipitation averaged across an area that contributes streamflow to some location of concern. Estimating areal average precipitation using some simple, manual methods has been outlined by the U.S. National Weather Service, illustrated in Figure 8.13 (source: National Weather Service). Figure 8.13: Some basic precipitation interpolation methods, from the U.S. National Weather Service. With the advent of geographical information system (GIS) software, manual interpolation is rarely used. Rather, more advanced spatial analysis is performed to interpolate precipitation onto a continuous grid, where the uncertainty (or skill) of different methods can be assessed. Spatial analysis methods to do this are outlined in many other references, such as Spatial Data Science and the related book Spatial Data Science with applications in R, or the reference Geocomputation with R (Lovelace et al., 2019; Pebesma & Bivand, 2023). There are also many sources of precipitation data already interpolated to a regular grid. The geodata package provides access to many data sets, including the Worldclim biophysical data. Another source of global precipitation data, available at daily to monthly scales, is the CHIRPS data set, which has been widely used in many studies. An example of obtaining and plotting average annual precipitation over Santa Clara County is illustrated below. #Load precipitation in mm, already cropped to cover most of California datafile <- system.file("extdata", "prcp_cropped.tif", package="hydromisc") prcp <- terra::rast(datafile) scc_bound <- terra::vect(hydromisc::scc_county) scc_precip <- terra::crop(prcp, scc_bound) terra::plot(scc_precip, plg=list(title="Precip\\n(mm)", title.cex=0.7)) terra::plot(scc_bound, add=TRUE) Figure 8.14: Annual Average Precipitation over Santa Clara County, mm Spatial statistics are easily obtained using terra, a versatile package for spatial analysis.
terra::summary(scc_precip) #> chirps.v2.0.1981.2020.40yrs #> Min. : 197.1 #> 1st Qu.: 354.9 #> Median : 447.9 #> Mean : 542.3 #> 3rd Qu.: 652.3 #> Max. :1297.2 #> NA's :5 "],["fate-of-precipitation.html", "Chapter 9 Fate of precipitation 9.1 Interception 9.2 Infiltration 9.3 Evaporation 9.4 Snow 9.5 Watershed analysis", " Chapter 9 Fate of precipitation As precipitation falls it can be caught on vegetation (interception), percolate into the ground (infiltration), return to the atmosphere (evaporation), or become available as runoff (accumulating as rain or snow). The landscape (land cover and topography) and the time scale of study determine what processes are important. For example, for estimating runoff from an individual storm, interception is likely to be small, as is evaporation. On an annual average over large areas, evaporation will often be the largest component. Comprehensive hydrology models will estimate abstractions due to infiltration and interception, either by simulating the physics of the phenomenon or by using a lumped parameter that accounts for the effects of abstractions on runoff. The hydromisc package will need to be installed to access some of the code and data used below. If it is not installed, do so following the instructions on the github site for the package. 9.1 Interception Figure 9.1: Rain interception by John Robert McPherson, CC BY-SA 4.0, via Wikimedia Commons Interception of rainfall is generally small during individual storms (0.5-2 mm), so it is often ignored, or lumped in with other abstractions, for analyses of flood hydrology. For areas characterized by low intensity rainfall and heavy vegetation, interception can account for a larger portion of the rainfall (for example, up to 25% of annual rainfall in the Pacific Northwest) (McCuen, 2016). 9.2 Infiltration An early empirical equation describing infiltration rate into soils was developed by Horton in 1939, which takes the form of Equation (9.1).
\\[\\begin{equation} f_p~=~ f_c + \\left(f_0 - f_c\\right)e^{-kt} \\tag{9.1} \\end{equation}\\] This describes a potential infiltration rate, \\(f_p\\), beginning at a maximum \\(f_0\\) and decreasing with time toward a minimum value \\(f_c\\) at a rate described by the decay constant \\(k\\). \\(f_c\\) is also equal to the saturated hydraulic conductivity, \\(K_s\\), of the soil. If the rainfall rate exceeds \\(f_c\\) then this equation describes the actual infiltration rate with time. If periods of time have rainfall less intense than \\(f_c\\), it is convenient to integrate this to relate the total cumulative depth of water infiltrated, \\(F\\), and the actual infiltration rate, \\(f_p\\), as in Equation (9.2). \\[\\begin{equation} F~=~\\left[\\frac{f_c}{k}ln\\left(f_0-f_c\\right)+\\frac{f_0}{k}\\right]-\\frac{f_c}{k}ln\\left(f_p-f_c\\right)-\\frac{f_p}{k} \\tag{9.2} \\end{equation}\\] A more physically based relationship to describe infiltration rate is the Green-Ampt model. It is based on the physical laws describing the propagation of a wetting front downward through a soil column under a ponded water surface. The Green-Ampt relationship is in Equation (9.3). \\[\\begin{equation} K_s~t~=~F-\\left(n-\\theta_i\\right)\\Phi_f~ln\\left[1+\\frac{F}{\\left(n-\\theta_i\\right)\\Phi_f}\\right] \\tag{9.3} \\end{equation}\\] Equation (9.3) assumes ponding begins at t=0, meaning the rainfall rate exceeds \\(K_s\\). When rainfall rates are less than that, adjustments to the method are used. Parameters are shown in the table below. Figure 9.2: Green-Ampt Parameter Estimates and Ranges based on Soil Texture (USACE) While not demonstrated here, parameters for the Horton and Green-Ampt methods can be derived from observed infiltration data using the R package vadose.
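Equation (9.1) is straightforward to evaluate directly. A short sketch, with f0, fc, and k set to assumed illustrative values rather than measurements for a particular soil:

```r
# Horton infiltration capacity, Equation (9.1)
horton_fp <- function(t, f0, fc, k) fc + (f0 - fc) * exp(-k * t)

# assumed parameters: f0 = 3.0 in/hr, fc = 0.53 in/hr, k = 4.18 hr^-1
t <- seq(0, 2, by = 0.25)  # time since start of infiltration, hr
fp <- horton_fp(t, f0 = 3.0, fc = 0.53, k = 4.18)
plot(t, fp, type = "l", xlab = "Time, hr", ylab = "Infiltration capacity, in/hr")
```

The plotted capacity starts at f0 and decays toward fc, the saturated hydraulic conductivity.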
The most widely used method for estimating infiltration is the NRCS method, described in detail in the NRCS document Estimating Runoff Volume and Peak Discharge. This method describes the direct runoff (as a depth), \\(Q\\), resulting from a precipitation event, \\(P\\), as in Equation (9.4). \\[\\begin{equation} Q~=~\\frac{\\left(P-I_a\\right)^2}{\\left(P-I_a\\right)+S} \\tag{9.4} \\end{equation}\\] \\(S\\) is the maximum retention of water by the soil column and \\(I_a\\) is the initial abstraction, commonly estimated as \\(I_a=0.2S\\). Substituting this into Equation (9.4) produces Equation (9.5). \\[\\begin{equation} Q~=~\\frac{\\left(P-0.2~S\\right)^2}{\\left(P+0.8~S\\right)} \\tag{9.5} \\end{equation}\\] This relationship applies as long as \\(P>0.2~S\\); Q=0 otherwise. Values for S are derived from a Curve Number (CN), which summarizes the land cover, soil type and condition: \\[CN=\\frac{1000}{10+S}\\] where \\(S\\), and subsequently \\(Q\\), are in inches. Equation (9.5) can be rearranged to a form similar to those for the Horton and Green-Ampt equations for cumulative infiltration, \\(F\\): \\[F~=~\\frac{\\left(P-0.2~S\\right)S}{P+0.8~S}\\] 9.3 Evaporation Evaporation is simply the change of water from liquid to vapor state. Because it is difficult to separate evaporation from the soil from transpiration from vegetation, the two are usually combined into evapotranspiration, or ET; see Figure 9.3. Figure 9.3: Schematic of ET, from CIMIS ET can be estimated in a variety of ways, but it is important first to define three types of ET: - Potential ET, \\(ET_p\\) or \\(PET\\): essentially the same as the rate that water would evaporate from a free water surface. - Reference crop ET, \\(ET_{ref}\\) or \\(ET_0\\): the rate water evaporates from a well-watered reference crop, usually grass of a standard height.
- Actual ET, \\(ET\\): this is the water used by a crop or other vegetation, usually calculated by adjusting the \\(ET_0\\) term by a crop coefficient that accounts for factors such as the plant height, growth stage, and soil exposure. Methods for estimating \\(ET_0\\) range from the uncomplicated Thornthwaite equation, which depends only on mean monthly temperatures, to the Penman-Monteith equation, which includes solar and longwave radiation, wind and humidity effects, and reference crop (grass) characteristics. Inclusion of more complexity, especially where observations can supply the needed input, produces more reliable estimates of \\(ET_0\\). One of the most common implementations of the Penman-Monteith equation is the version of the FAO (FAO Irrigation and drainage paper 56, or FAO56) (Allen & United Nations, 1998). Refer to FAO56 for step-by-step instructions on determining each term in the Penman-Monteith equation, Equation (9.6). \\[\\begin{equation} \\lambda~ET~=~\\frac{\\Delta\\left(R_n-G\\right)+\\rho_ac_p\\frac{\\left(e_s-e_a\\right)}{r_a}}{\\Delta+\\gamma\\left(1+\\frac{r_s}{r_a}\\right)} \\tag{9.6} \\end{equation}\\] Open water evaporation can be calculated using the original Penman equation (1948): \\[\\lambda~E_p~=~\\frac{\\Delta~R_n+\\gamma~E_a}{\\Delta~+~\\gamma}\\] where \\(R_n\\) is the net radiation available to evaporate water and \\(E_a\\) is a mass transfer function usually including humidity (or vapor pressure deficit) and wind speed. \\(\\lambda\\) is the latent heat of vaporization of water. A common implementation of the Penman equation is \\[\\begin{equation} \\lambda~E_p~=~\\frac{\\Delta~R_n+\\gamma~6.43\\left(1+0.536~U_2\\right)\\left(e_s-e\\right)}{\\Delta~+~\\gamma} \\tag{9.7} \\end{equation}\\] Here \\(E_p\\) is in mm/d, \\(\\Delta\\) and \\(\\gamma\\) are in \\(kPa~K^{-1}\\), \\(R_n\\) is in \\(MJ~m^{-2}~d^{-1}\\), \\(U_2\\) is in m/s, and \\(e_s\\) and \\(e\\) are in kPa. Variables are as defined in FAO56. 
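Equation (9.7) is simple enough to evaluate directly. The sketch below uses assumed, typical warm-season values for every input (not from any observed record); dividing the latent heat flux by \\(\\lambda\\) converts it to an equivalent depth of water.

```r
# Penman open-water evaporation, Equation (9.7)
# All input values below are assumed, typical warm-season magnitudes
Delta  <- 0.145  # slope of saturation vapor pressure curve, kPa/K (~20 C)
gamma  <- 0.067  # psychrometric constant, kPa/K
Rn     <- 15     # net radiation, MJ m-2 d-1
U2     <- 2      # wind speed at 2 m, m/s
es     <- 2.34   # saturation vapor pressure, kPa (~20 C)
e      <- 1.40   # actual vapor pressure, kPa
lambda <- 2.45   # latent heat of vaporization, MJ/kg

lambdaEp <- (Delta * Rn + gamma * 6.43 * (1 + 0.536 * U2) * (es - e)) /
  (Delta + gamma)          # latent heat flux, MJ m-2 d-1
Ep <- lambdaEp / lambda    # open water evaporation as a depth, mm/d
```

With these inputs the result is on the order of 5-6 mm/d, a plausible mid-summer open-water rate.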
Open water evaporation can also be calculated using a modified version of the Penman-Monteith equation (9.6). In this latter case, vegetation coefficients are not needed, so Equation (9.6) can be used with \\(r_s=0\\) and \\(r_a=251/(1+0.536~u_2)\\), following Thom & Oliver (1977). The R package Evaporation has functions to calculate \\(ET_0\\) using this and many other methods. This is especially useful when calculating PET over many points or through a long time series. 9.4 Snow 9.4.1 Observations In mountainous areas a substantial portion of the precipitation may fall as snow, where it can be stored for months before melting and becoming runoff. Any hydrologic analysis in an area affected by snow must account for the dynamics of this natural reservoir and how it affects water supply. In the Western U.S., the most comprehensive observations of snow are part of the SNOTEL (SNOw TELemetry) network. Figure 9.4: The SNOTEL network. 9.4.2 Basic snowmelt theory and simple models For snow to melt, heat must be added to first bring the snowpack to the melting point; it takes about 2 kJ/kg to raise the snowpack temperature by 1\\(^\\circ\\)C. Additional heat is required for the phase change from ice to water (the latent heat of fusion), about 335 kJ/kg. Heat can be provided by absorbing solar radiation, longwave radiation, ground heat, warm air, warm rain falling on the snowpack, or water vapor condensing on the snow. Once snow melts, it can percolate through the snowpack and be retained, similar to water retained by soil, and may re-freeze (releasing the latent heat of fusion, which can then cause more melt). As with any other hydrologic process, there are many ways it can be modeled, from simplified empirical relationships to complex physics-based representations. While accounting for all of the many processes involved would be a robust approach, often there are not adequate observations to support their use, so simpler parameterizations are used. 
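A short calculation with the approximate values just cited shows the relative size of the two heat requirements; the snowpack SWE and temperature used here are arbitrary example values.

```r
# Heat required to warm a cold snowpack to 0 C and then melt it completely.
# The SWE and snowpack temperature below are arbitrary example values.
swe <- 500     # snow water equivalent, mm (1 mm over 1 m^2 weighs 1 kg)
Ts  <- -5      # snowpack temperature, C
ci  <- 2       # approximate heat to warm snow, kJ/kg/C (cited above)
Lf  <- 335     # latent heat of fusion, kJ/kg (cited above)
Q_warm <- swe * ci * (0 - Ts)   # kJ per m^2 to bring the pack to 0 C
Q_melt <- swe * Lf              # kJ per m^2 for the phase change
c(Q_warm, Q_melt)               # 5000 167500
```

Even for a pack well below freezing, the warming term is a small fraction of the latent heat of fusion, which is why melt models focus on the energy supplied during the melt period.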
Here only the simplest index-based snow model is discussed, as in Equation (9.8). \\[\\begin{equation} M~=~K_d\\left(T_a~-~T_b\\right) \\tag{9.8} \\end{equation}\\] \\(M\\) is the melt rate in mm/d (or in/d), \\(T_a\\) is air temperature (sometimes a daily mean, sometimes a daily maximum), \\(T_b\\) is a base temperature, usually 0\\(^\\circ\\)C (or 32\\(^\\circ\\)F), and \\(K_d\\) is a degree-day melt factor in mm/d/\\(^\\circ\\)C (or in/d/\\(^\\circ\\)F). The melt factor, \\(K_d\\), is highly dependent on local conditions and on the time of year (as an indicator of the snow pack condition); for example, different \\(K_d\\) factors can be used for different months. Refreezing of melted snow, when temperatures are below \\(T_b\\), can also be estimated using an index model, such as Equation (9.9). \\[\\begin{equation} Fr~=~K_f\\left(T_b~-~T_a\\right) \\tag{9.9} \\end{equation}\\] Importantly, temperature-index snowmelt relations have been developed primarily for describing snowmelt at the end of the season, after the peak of snow accumulation (typically April-May in the mountainous western U.S.), and their use during the snow accumulation season may overestimate melt. Different degree-day factors are often used, with the factors increasing later in the melt season. From a hydrologic perspective, the most important snow property is the snow water equivalent (SWE), which is the depth of water obtained by melting the snow. An example of using a snowmelt index model follows. Example 9.1 Manually calibrate an index snowmelt model for a SNOTEL site using one year of data. Visit the SNOTEL website to select a site. In this example site 1050, Horse Meadow, located in California, is used. Next download the data using the snotelr package (install the package first, if needed). sta <- "1050" snow_data <- snotelr::snotel_download(site_id = sta, internal = TRUE) Plot the data to assess the period available and how complete it is. 
plot(as.Date(snow_data$date), snow_data$snow_water_equivalent, type = "l", xlab = "Date", ylab = "SWE (mm)") Figure 9.5: Snow water equivalent at SNOTEL site 1050. Note the units are SI. If you download data directly from the SNOTEL web site, the data would be in conventional US units. snotelr converts the data to SI units as it imports. The package includes a function snotel_metric that could be used to convert raw data downloaded from the SNOTEL website to SI units. For this exercise, extract a single (water) year, meaning from 1-Oct to 30-Sep, so an entire winter is in one year. In addition, create a data frame that only includes columns that are needed. snow_data_subset <- subset(snow_data, as.Date(date) >= as.Date("2008-10-01") & as.Date(date) <= as.Date("2009-09-30")) snow_data_sel <- subset(snow_data_subset, select=c("date", "snow_water_equivalent", "precipitation", "temperature_mean", "temperature_min", "temperature_max")) plot(as.Date(snow_data_sel$date),snow_data_sel$snow_water_equivalent, type = "l",xlab = "Date", ylab = "SWE (mm)") grid() Figure 9.6: Snow water equivalent at SNOTEL site 1050 for water year 2009. Now use a snow index model to simulate the SWE based on temperature and precipitation. The model used here is a modified version of that used in the hydromad package. The snow.sim command is used to run a snow index model; type ?hydromisc::snow.sim for details on its use. As a summary, the four main parameters you can adjust in the calibration of the model are: The maximum air temperature for snow, Tmax. Snow can fall at air temperatures as high as about 3\\(^\\circ\\)C, but Tmax is usually lower. The minimum air temperature for rain, Tmin. Rain can fall when near-surface air temperatures are below freezing. This may be as low as about -1\\(^\\circ\\)C (or a little lower), and as high as 1\\(^\\circ\\)C. Base temperature, Tmelt, the temperature at which melt begins. 
Usually the default of 0\\(^\\circ\\)C is used, but some adjustment (generally between -2 and 2\\(^\\circ\\)C) can be applied to improve model calibration. Snow Melt (Degree-Day) Factor, kd, which describes the melting of the snow when temperatures are above freezing. Be careful using values from different references as these are dependent on units. Typical values are between 1 and 5 mm/d/\\(^\\circ\\)C. Two additional parameters are optional; their effects are typically small. Degree-Day Factor for freezing, kf, of liquid water in the snow pack when temperatures are below freezing. By default it is set to 1 mm/d/\\(^\\circ\\)C, and may vary from 0 to 2 mm/d/\\(^\\circ\\)C. Snow water retention factor, rcap. When snow melts some of it can be retained via capillarity in the snow pack, where it can re-freeze or drain out. This is expressed as a fraction of the frozen snow pack. The default is 2.5% (rcap = 0.025). Start with some assumed values and run the snow model. Tmax_snow <- 3 Tmin_rain <- 2 kd <- 1 snow_estim <- hydromisc::snow.sim(DATA=snow_data_sel, Tmax=Tmax_snow, Tmin=Tmin_rain, kd=kd) Now the simulated values can be compared to the observations. If not installed already, install the hydroGOF package, which has some useful functions for evaluating how well modeled output fits observations. In the plot that follows we specify three measures of goodness-of-fit: Mean Absolute Error (MAE) Root Mean Square Error (RMSE) Percent Bias (PBIAS) These are discussed in detail in other references, but the aim is to calibrate (change the input parameters) until these values are low. obs <- snow_data_sel$snow_water_equivalent sim <- snow_estim$swe_simulated hydroGOF::ggof(sim, obs, na.rm = TRUE, dates=snow_data_sel$date, gofs=c("MAE", "RMSE", "PBIAS"), xlab = "", ylab="SWE, mm", tick.tstep="months", cex=c(0,0),lwd=c(2,2)) Figure 9.7: Simulated and Observed SWE at SNOTEL site 1050 for water year 2009. 
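These goodness-of-fit measures can also be computed directly from their definitions, which makes clear what each one weighs. The short vectors below are made-up numbers, not the SNOTEL data.

```r
# Goodness-of-fit measures computed from their definitions.
# obs_ex and sim_ex are made-up example vectors, not the SNOTEL data.
obs_ex <- c(100, 150, 200, 250, 200)
sim_ex <- c(110, 140, 210, 230, 190)
mae   <- mean(abs(sim_ex - obs_ex))                # Mean Absolute Error
rmse  <- sqrt(mean((sim_ex - obs_ex)^2))           # Root Mean Square Error
pbias <- 100 * sum(sim_ex - obs_ex) / sum(obs_ex)  # Percent Bias
rsr   <- rmse / sd(obs_ex)          # RMSE / standard deviation of obs
```

MAE and RMSE are in the units of the data (mm of SWE here), PBIAS shows whether the model is systematically high or low, and RSR normalizes RMSE by the variability of the observations.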
Melt is overestimated in the early part of the year and underestimated during the melt season, showing why a single index is not a very robust model. Applying two kd values, one for early to mid snow season and another for later snowmelt, could improve the model, but it would make the model less applicable to other situations, such as increased temperatures. 9.4.3 Snow model calibration While manual model calibration can improve the fit, a more complete calibration involves optimization methods that search the parameter space for the optimal combination of parameter values. A useful tool for doing that is the optim function, part of the stats package installed with base R. Using optim requires establishing a function that should be minimized, where the parameters to be included in the optimization are the first argument. With the method="L-BFGS-B" option used here, optim allows you to explicitly give ranges over which parameters can be varied, via the upper and lower arguments. An example of this follows, where the four main model parameters noted above are used, and the MAE is minimized. fcn_to_minimize <- function(par,datain, obs){ snow_estim <- hydromisc::snow.sim(DATA=datain, Tmax=par[1], Tmin=par[2], kd=par[3], Tmelt=par[4]) calib.stats <- hydroGOF::gof(snow_estim$swe_simulated,obs,na.rm=TRUE) objective_stat <- as.numeric(calib.stats['MAE',]) return(objective_stat) } opt_res <- optim(par=c(0.5,1,1,0),fn=fcn_to_minimize, lower=c(-1,-1,0.5,-2), upper=c(3,1,5,3), method="L-BFGS-B", datain=snow_data_sel, obs=obs) #print out optimal parameters - note Tmax and Tmin can be reversed during optimization cat(sprintf("Optimal parameters:\\nTmax=%.1f\\nTmin=%.1f\\nkd=%.2f\\nTmelt=%.1f\\n", max(opt_res$par[1],opt_res$par[2]),min(opt_res$par[1],opt_res$par[2]), opt_res$par[3],opt_res$par[4])) #> Optimal parameters: #> Tmax=1.0 #> Tmin=0.5 #> kd=1.05 #> Tmelt=-0.0 The results using the optimal parameters can be plotted to visualize the simulation. 
snow_estim_opt <- hydromisc::snow.sim(DATA=snow_data_sel, Tmax=max(opt_res$par[1],opt_res$par[2]), Tmin=min(opt_res$par[1],opt_res$par[2]), kd=opt_res$par[3], Tmelt=opt_res$par[4]) obs <- snow_data_sel$snow_water_equivalent sim <- snow_estim_opt$swe_simulated hydroGOF::ggof(sim, obs, na.rm = TRUE, dates=snow_data_sel$date, gofs=c("MAE", "RMSE", "PBIAS"), xlab = "", ylab="SWE, mm", tick.tstep="months", cex=c(0,0),lwd=c(2,2)) Figure 9.8: Optimal simulation of SWE at SNOTEL site 1050 for water year 2009. It is clear that a simple temperature index model cannot capture the snow dynamics at this location, especially during the winter when melt is significantly overestimated. 9.4.4 Estimating climate change impacts on snow Once a reasonable calibration is obtained, the effect of increasing temperatures on SWE can be simulated by including the deltaT argument in the hydromisc::snow.sim command. Here a 3\\(^\\circ\\)C uniform temperature increase is imposed on the optimal parameterization above. dT <- 3.0 snow_plus3 <- hydromisc::snow.sim(DATA=snow_data_sel, Tmax=max(opt_res$par[1],opt_res$par[2]), Tmin=min(opt_res$par[1],opt_res$par[2]), kd=opt_res$par[3], Tmelt=opt_res$par[4], deltaT = dT) simplusdT <- snow_plus3$swe_simulated # plot the results dTlegend <- expression("Simulated"*+3~degree*C) plot(as.Date(snow_data_sel$date),obs,type = "l",xlab = "", ylab = "SWE (mm)") lines(as.Date(snow_estim$date),sim,lty=2,col="blue") lines(as.Date(snow_estim$date),simplusdT,lty=3,col="red") legend("topright", legend = c("Observed", "Simulated",dTlegend), lty = c(1,2,3), col=c("black","blue","red")) grid() Figure 9.9: Observed SWE and simulated with observed meteorology and increased temperatures. 9.5 Watershed analysis Whether precipitation falls as rain or snow, how much is absorbed by plants, consumed by evapotranspiration, and what is left to become runoff, is all determined by watershed characteristics. 
This can include: watershed area, slope of terrain, elevation variability (a hypsometric curve), soil types, and land cover. Collecting this information begins with obtaining a digital elevation model for an area, identifying any key point or points on a stream (a watershed outlet), and then delineating the area that drains to that point. This process of watershed delineation is often done with GIS software like ArcGIS or QGIS. The R package WhiteboxTools provides capabilities for advanced terrain analysis in R. Demonstrations of the use of these tools for a watershed are in the online book Hydroinformatics at VT by JP Gannon. In particular, the chapters on mapping a stream network and delineating a watershed are excellent resources for exploring these capabilities in R. For locations in the U.S., watersheds, stream networks, and attributes of both can be obtained and viewed using nhdplusTools. Land cover and soil information can be obtained using the FedData package. Chapter 10 Designing for floods: flood hydrology Figure 10.1: The international bridge between Fort Kent, Maine and Clair, New Brunswick during a flood (source: NOAA) Flood hydrology is generally the description of how frequently a flood of a certain level will be exceeded in a specified period. This was discussed briefly in the section on precipitation frequency, Section 8.2. The hydromisc package will need to be installed to access some of the data used below. If it is not installed, do so following the instructions on the github site for the package. 10.1 Engineering design requires probability and statistics Before diving into peak flow analysis, it helps to refresh your background in basic probability and statistics. 
Some excellent resources for this using R as the primary tool are: A very brief tutorial, by Prof. W.B. King A more thorough text, by Prof. G. Jay Kerns. This has a companion R package. An online flood hydrology reference based in R, by Prof. Helen Fairweather Rather than repeat what is in those references, a couple of short demonstrations here will show some of the skills needed for flood hydrology. The first example illustrates binomial probabilities, which are useful for events with only two possible outcomes (e.g., a flood happens or it doesn’t), where outcomes are independent and their probabilities are constant. R functions for distributions use a first letter to designate what it returns: d is the density, p is the (cumulative) distribution, q is the quantile, r is a random sequence. In R the defaults for probabilities are to define them as \\(P[X~\\le~x]\\), or a probability of non-exceedance. Recall that a probability of exceedance is simply 1 - (probability of non-exceedance), or \\(P[X~\\gt~x] ~=~ 1-P[X~\\le~x]\\). In R, for quantiles or probabilities (using functions beginning with q or p like pnorm or qlnorm) setting the argument lower.tail to FALSE uses a probability of exceedance instead of non-exceedance. Example 10.1 A temporary dam is constructed while a repair is built. It will be in place 5 years and is designed to protect against floods up to a 20-year recurrence interval (i.e., there is a \\(p=\\frac{1}{20}=0.05\\), or 5% chance, that it will be exceeded in any one year). What is the probability of (a) no failure in the 5-year period, and (b) at least two failures in 5 years? 
# (a) ans1 <- dbinom(0, 5, 0.05) cat(sprintf("Probability of exactly zero occurrences in 5 years = %.4f %%",100*ans1)) #> Probability of exactly zero occurrences in 5 years = 77.3781 % # (b) ans2 <- 1 - pbinom(1,5,.05) # or pbinom(1,5,.05, lower.tail=FALSE) cat(sprintf("Probability of 2 or more failures in 5 years = %.2f %%",100*ans2)) #> Probability of 2 or more failures in 5 years = 2.26 % While the next example uses normally distributed data, most data in hydrology are better described by other distributions. Example 10.2 Annual average streamflows in some location are normally distributed with a mean annual flow of 20 m\\(^3\\)/s and a standard deviation of 6 m\\(^3\\)/s. Find (a) the probability of experiencing a year with less than (or equal to) 10 m\\(^3\\)/s, (b) greater than 32 m\\(^3\\)/s, and (c) the annual average flow that would be expected to be exceeded 10% of the time. # (a) ans1 <- pnorm(10, mean=20, sd=6) cat(sprintf("Probability of less than 10 = %.2f %%",100*ans1)) #> Probability of less than 10 = 4.78 % # (b) ans2 <- pnorm(32, mean=20, sd=6, lower.tail = FALSE) #or 1 - pnorm(32, mean=20, sd=6) cat(sprintf("Probability of greater than or equal to 32 = %.2f %%",100*ans2)) #> Probability of greater than or equal to 32 = 2.28 % # (c) ans3 <- qnorm(.1, mean=20, sd=6, lower.tail=FALSE) cat(sprintf("flow exceeded 10%% of the time = %.2f m^3/s",ans3)) #> flow exceeded 10% of the time = 27.69 m^3/s # plot to visualize answers x <- seq(0,40,0.1) y<- pnorm(x,mean=20,sd=6) xlbl <- expression(paste(Flow, ",", ~ m^"3"/s)) plot(x ,y ,type="l",lwd=2, xlab = xlbl, ylab= "Prob. of non-exceedance") abline(v=10,col="black", lwd=2, lty=2) abline(v=32,col="blue", lwd=2, lty=2) abline(h=0.9,col="green", lwd=2, lty=2) legend("bottomright",legend=c("(a)","(b)","(c)"),col=c("black","blue","green"), cex=0.8, lty=2) Figure 10.2: Illustration of three solutions. 
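A related quantity that appears often in design is hydrologic risk, the chance that a \\(T\\)-year event occurs at least once during an \\(n\\)-year period. It follows from the same binomial logic as Example 10.1; the helper function below is a sketch, not part of any package.

```r
# Hydrologic risk: probability that a T-year event occurs at least once
# in an n-year period, the complement of zero occurrences
risk <- function(T, n) 1 - (1 - 1/T)^n
risk(20, 5)     # the temporary dam of Example 10.1: about 0.226
risk(100, 30)   # a 100-year flood over a 30-year design life: about 0.26
```

The first result matches 1 - dbinom(0, 5, 0.05) from Example 10.1, and the second shows that a "100-year" flood has roughly a 1-in-4 chance of occurring during a typical 30-year design life.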
10.2 Estimating floods when you have peak flow observations - flood frequency analysis For an area fortunate enough to have a long record (i.e., several decades or more) of observations, estimating flood risk is a matter of statistical data analysis. In the U.S., data collected by the U.S. Geological Survey (USGS) can be accessed through the National Water Dashboard. Sometimes for discontinued stations it is easier to locate data through the older USGS map interface. For any site, data may be downloaded to a file, and the peakfq (watstore) format, designed to be imported into the PeakFQ software, is easy to work with in R. 10.2.1 Installing helpful packages The USGS has developed many R packages, including one for retrieval of data, dataRetrieval. Since this resides on CRAN, the package can be installed with (the use of ‘!requireNamespace’ skips the installation if it is already installed): if (!requireNamespace("dataRetrieval", quietly = TRUE)) install.packages("dataRetrieval") Other USGS packages that are very helpful for peak flow analysis are not on CRAN, but rather housed in a USGS repository. The easiest way to install packages from that archive is using the install.load package. Then the install_load command will first search the standard CRAN archive for the package, and if it is not found there, the USGS archive is searched. Packages are also loaded (equivalent to using the library command). install_load also installs dependencies of packages, so here installing smwrGraphs also installs smwrBase. The prefix smwr refers to their use in support of the excellent reference Statistical Methods in Water Resources. 
if (!requireNamespace("install.load", quietly = TRUE)) install.packages("install.load") install.load::install_load("smwrGraphs") #this command also installs smwrBase Lastly, the lmomco package has extensive capabilities to work with many forms of probability distributions, and has functions for calculating distribution parameters (like skew) that we will use. if (!requireNamespace("lmomco", quietly = TRUE)) install.packages("lmomco") 10.2.2 Download, manipulate, and plot the data for a site Using the older USGS site mapper, and specifying that inactive stations should also be included, many stations in the south Bay Area in California are shown in Figure 10.3. Figure 10.3: Active and Inactive USGS sites recording peak flows. While the data could be downloaded and saved locally through that link, it is convenient here to use the dataRetrieval command. Qpeak_download <- dataRetrieval::readNWISpeak(siteNumbers="11169000") The data used here are also available as part of the hydromisc package, and may be obtained by typing hydromisc::Qpeak_download. It is always helpful to look at the downloaded data frame before doing anything with it. There are many columns that are not needed or that have repeated information. There are also some rows that have no data (‘NA’ values). It is also useful to change some column names to something more intuitive. We will need to define the water year (a water year begins October 1 and ends September 30). Qpeak <- Qpeak_download[!is.na(Qpeak_download$peak_dt),c('peak_dt','peak_va')] colnames(Qpeak)[colnames(Qpeak)=="peak_dt"] <- "Date" colnames(Qpeak)[colnames(Qpeak)=="peak_va"] <- "Peak" Qpeak$wy <- smwrBase::waterYear(Qpeak$Date) The data have now been simplified so that they can be used more easily in the subsequent flood frequency analysis. Data should always be plotted, which can be done many ways. 
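As an aside, the water-year assignment done here with smwrBase::waterYear can be sketched in a few lines of base R; this simple version (a hypothetical helper, not from any package) only implements the convention stated above, labeling each water year by the calendar year in which it ends.

```r
# Assign water years in base R: a water year runs 1-Oct through 30-Sep
# and is labeled by the calendar year in which it ends
water_year <- function(d) {
  d  <- as.Date(d)
  yr <- as.integer(format(d, "%Y"))
  mo <- as.integer(format(d, "%m"))
  ifelse(mo >= 10, yr + 1L, yr)
}
water_year(c("2008-09-30", "2008-10-01"))   # 2008 2009
```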
As a demonstration of highlighting specific years in a barplot, the strongest El Niño years (in 1930-2002) from NOAA Physical Sciences Lab can be highlighted in red. xlbl <- "Water Year" ylbl <- expression("Peak Flow, " ~ ft^{3}/s) nino_years <- c(1983,1998,1992,1931,1973,1987,1941,1958,1966, 1995) cols <- c("blue", "red")[(Qpeak$wy %in% nino_years) + 1] barplot(Qpeak$Peak, names.arg = Qpeak$wy, xlab = xlbl, ylab=ylbl, col=cols) Figure 10.4: Annual peak flows for USGS gauge 11169000, highlighting strong El Niño years in red. 10.2.3 Flood frequency analysis The general formula used for flood frequency analysis is Equation (10.1). \\[\\begin{equation} y=\\overline{y}+Ks_y \\tag{10.1} \\end{equation}\\] where y is the flow at the designated return period, \\(\\overline{y}\\) is the mean of all \\(y\\) values and \\(s_y\\) is the standard deviation. In most instances, \\(y\\) is a log-transformed flow; in the US a base-10 logarithm is generally used. \\(K\\) is a frequency factor, which is a function of the return period, the parent distribution, and often the skew of the y values. The guidance of the USGS (as in Guidelines for Determining Flood Flow Frequency, Bulletin 17C) (England, J.F. et al., 2019) is to use the log-Pearson Type III (LP-III) distribution for flood frequency data, though in different settings other distributions can perform comparably. For using the LP-III distribution, we will need several statistical properties of the data: mean, standard deviation, and skew, all of the log-transformed data, calculated as follows. mn <- mean(log10(Qpeak$Peak)) std <- sd(log10(Qpeak$Peak)) g <- lmomco::pmoms(log10(Qpeak$Peak))$skew With those calculated, a defined return period can be chosen and the flood frequency factors, from Equation (10.1), calculated for that return period (the example here is for a 50-year return period). 
The qnorm function from base R and the qpearsonIII function from the smwrBase package make this straightforward, and K values for Equation (10.1) are obtained for a lognormal, Knorm, and LP-III, Klp3. RP <- 50 Knorm <- qnorm(1 - 1/RP) Klp3 <- smwrBase::qpearsonIII(1-1/RP, skew = g) Now the flood frequency equation (10.1) can be applied to calculate the flows associated with the 50-year return period for each of the distributions. Remember to take the anti-log of your answer to return to standard units. Qpk_LN <- mn + Knorm * std Qpk_LP3 <- mn + Klp3 * std sprintf("RP = %d years, Qpeak LN = %.0f cfs, Qpeak LP3 = %.0f",RP,10^Qpk_LN,10^Qpk_LP3) #> [1] "RP = 50 years, Qpeak LN = 18362 cfs, Qpeak LP3 = 12396" 10.2.4 Creating a flood frequency plot Different probability distributions can produce very different results for a design flood flow. Plotting the historical observations along with the distributions, the lognormal and LP-III in this case, can help explain why they differ, and provide indications of which fits the data better. We cannot say exactly what the exceedance probability of any observed flood is. However, given a long record, the probability can be described using the general “plotting position” equation from Bulletin 17C, as in Equation (10.2). \\[\\begin{equation} p_i=\\frac{i-a}{n+1-2a} \\tag{10.2} \\end{equation}\\] where n is the total number of data points (annual peak flows in this case), \\(p_i\\) is the exceedance probability of flood observation i, where flows are ranked in descending order (so the largest observed flood has \\(i=1\\) and the smallest is \\(i=n\\)). The parameter a is between 0 and 0.5. For simplicity, the following will use \\(a=0\\), so the plotting Equation (10.2) becomes the Weibull formula, Equation (10.3). 
\\[\\begin{equation} p_i=\\frac{i}{n+1} \\tag{10.3} \\end{equation}\\] While not necessary, to add probabilities to the annual flow sequence we will create a new data frame consisting of the observed peak flows, sorted in descending order. df_pp <- as.data.frame(list('Obs_peak'=sort(Qpeak$Peak,decreasing = TRUE))) This can be done with fewer commands, but here is an example where first a rank column is created (1=highest peak in the record of N years), followed by adding columns for the exceedance and non-exceedance probabilities: df_pp$rank <- as.integer(seq(1:length(df_pp$Obs_peak))) df_pp$exc_prob <- (df_pp$rank/(1+length(df_pp$Obs_peak))) df_pp$ne_prob <- 1-df_pp$exc_prob For each of the non-exceedance probabilities calculated for the observed peak flows, use the flood frequency equation (10.1) to estimate the peak flow that would be predicted by a lognormal or LP-III distribution. This is the same thing that was done above for a specified return period, but now it will be “applied” to an entire column. df_pp$LN_peak <- mapply(function(x) {10^(mn+std*qnorm(x))}, df_pp$ne_prob) df_pp$LP3_peak <- mapply(function(x) {10^(mn+std*smwrBase::qpearsonIII(x, skew=g))},df_pp$ne_prob) There are many packages that create probability plots (see, for example, the versatile scales package for ggplot2). For this example the USGS smwrGraphs package is used. First, for aesthetics, create x- and y-axis labels. ylbl <- expression("Peak Flow, " ~ ft^{3}/s) xlbl <- "Non-exceedence Probability" The smwrGraphs package works most easily if it writes output directly to a file, a PNG file in this case, using the setPNG command; the file name and its dimensions in inches are given as arguments, and the PNG device is opened for writing. This is followed by commands to plot the data on a graph. Technically, the data are plotted to an object, here called prob.pl. The probPlot command plots the observed peaks as points, where the alpha argument is the a in Equation (10.2). 
Additional points or lines are added with the addXY command, used here to add the LN and LP3 data as lines (one solid, one dashed). Finally, a legend is added (the USGS refers to that as an “Explanation”), and the output PNG file is closed with the dev.off() command. smwrGraphs::setPNG("probplot_smwr.png",6.5, 3.5) #> width height #> 6.5 3.5 #> [1] "Setting up markdown graphics device: probplot_smwr.png" prob.pl <- smwrGraphs::probPlot(df_pp$Obs_peak, alpha = 0.0, Plot=list(what="points",size=0.05,name="Obs"), xtitle=xlbl, ytitle=ylbl) prob.pl <- smwrGraphs::addXY(df_pp$ne_prob,df_pp$LN_peak,Plot=list(what="lines",name="LN"),current=prob.pl) prob.pl <- smwrGraphs::addXY(df_pp$ne_prob,df_pp$LP3_peak,Plot=list(what="lines",type="dashed",name="LP3"),current=prob.pl) smwrGraphs::addExplanation(prob.pl,"ul",title="") dev.off() #> png #> 2 The output won’t be immediately visible in RStudio – navigate to the file and click on it to view it. Figure 10.5 shows the output from the above commands. Figure 10.5: Probability plot for USGS gauge 11169000 for years 1930-2002. 10.2.5 Other software for peak flow analysis Much of the analysis above can be achieved using the PeakFQ software developed by the USGS. It incorporates the methods in Bulletin 17C via a graphical interface and can import data in the watstore format as discussed above in Section 10.2. The USGS has also produced the MGBT R package to perform many of the statistical calculations involved in the Bulletin 17C procedures. 10.3 Estimating floods from precipitation When extensive streamflow data are not available, flood risk can be estimated from precipitation and the characteristics of the area contributing flow to a point. While not covered here (or not yet…), there has been extensive development of hydrological modeling using R, summarized in recent papers (Astagneau et al., 2021; Slater et al., 2019). 
Straightforward application of methods to estimate peak flows or hydrographs resulting from design storms can be achieved by writing code to apply the Rational Formula (included in the VFS and hydRopUrban packages, for example) or the NRCS peak flow method. For more sophisticated analysis of water supply and drought, continuous modeling is required. A very good introduction to hydrological modeling in R, including model calibration and assessment, is included in the Hydroinformatics at VT reference by JP Gannon. Chapter 11 Sustainability in design: planning for change Figure 11.1: Yearly surface temperature compared to the 20th-century average from 1880–2022, from Climate.gov All systems engineered to last more than a decade or two, so everything civil engineers work on, will need to be designed to be resilient to dramatic environmental changes. As societies respond to the impacts of a disrupted climate, demands for water, energy, housing, food, and other essential services will change. This will result in economic disruption as well. This chapter presents a few ways long-term sustainability can be considered, looking at sensitivity of systems, detection of shifts or trends, and how economics and management may respond. This is far briefer than this rich topic deserves. The hydromisc package will need to be installed to access some of the data used below. If it is not installed, do so following the instructions on the github site for the package. 11.1 Perturbing a system When a system is perturbed it can respond in many ways. A useful classification of these was developed by Marshall & Toffel (2005). Figure 11.2 is an adaptation of Figure 2 from that paper. Figure 11.2: Pathways of recovery or degradation a system may take after initial perturbation. 
In essence, after a system is degraded, it can eventually rebound to its original condition (Type 1), rebound to some other state that is degraded from its original (Types 2 and 3), or completely collapse (Type 4). Which path is taken depends on the degree of the initial disruption and the ability of the system to recover. While originally cast with time as the x-axis, Figure 11.2 is equally applicable when looking at a system that travels over a distance, such as a flowing river. The form of the curves in Figure 11.2 appears similar to a classic dissolved oxygen sag curve, as in Figure 11.3. Figure 11.3: Dissolved oxygen levels in a stream following an input of waste (source: EPA). The Streeter-Phelps equation describes the response of the dissolved oxygen (DO) levels in a water body to a perturbation, such as the discharge of wastewater with a high oxygen demand. Some important assumptions are that steady-state conditions exist, and that the flow moves as plug flow, progressing downstream along a one-dimensional path. The Streeter-Phelps relationship is given by Equation (11.1). \\[\\begin{equation} D=C_s-C=\\frac{K_1^\\prime L_0}{K_2^\\prime - K_1^\\prime}\\left(e^{-K_1^\\prime t}-e^{-K_2^\\prime t}\\right)+D_0e^{-K_2^\\prime t} \\tag{11.1} \\end{equation}\\] where \\(D\\) is the DO deficit, \\(C_s\\) is the saturation DO concentration, \\(C\\) is the DO concentration, \\(t\\) is the travel time from the discharge, \\(D_0\\) is the initial DO deficit, and \\(L_0\\) is the ultimate (first-stage) BOD at the discharge, calculated from the 5-day BOD by Equation (11.2) (with \\(t=5\\) days). \\[\\begin{equation} L_0=\\frac{BOD_5}{1-e^{-K_1^\\prime t}} \\tag{11.2} \\end{equation}\\] \\(K_1^\\prime\\) and \\(K_2^\\prime\\) are the deoxygenation and reaeration coefficients, both adjusted for temperature. Usually the coefficients \\(K_1\\) and \\(K_2\\) are defined at 20\\(^\\circ\\)C, and then adjusted by empirical relationships for the actual water temperature using Equation (11.3). 
\\[\\begin{equation} K^\\prime = K\\theta ^{T-20} \\tag{11.3} \\end{equation}\\] where \\(\\theta\\) is set to typical values of 1.135 for \\(K_1\\) for \\(T\\le20^\\circ C\\) (and 1.056 otherwise) and 1.024 for \\(K_2\\). As a demonstration, functions (only available for SI units) in the hydromisc package can be used to explore the recovery of an aquatic system from a perturbation, as in Example 11.1. Example 11.1 A river with a flow of 7 \\(m^3/s\\) and a velocity of 1.4 m/s has effluent discharged into it at a rate of 1.5 \\(m^3/s\\). The river upstream of the discharge has a temperature of 15\\(^\\circ\\)C, a \\(BOD_5\\) of 1 mg/L, and a dissolved oxygen saturation of 90 percent. The effluent is 21\\(^\\circ\\)C with a \\(BOD_5\\) of 180 mg/L and a dissolved oxygen saturation of 0 percent. The deoxygenation rate constant (at 20\\(^\\circ\\)C) is 0.4 \\(d^{-1}\\), and the reaeration rate constant is 0.8 \\(d^{-1}\\). Create a plot of DO as a percent of saturation (y-axis) vs. distance in km (x-axis). First set up the model parameters.
Q <- 7            # flow of stream, m3/s
V <- 1.4          # velocity of stream, m/s
Qeff <- 1.5       # flow rate of effluent, m3/s
DOsatupstr <- 90  # DO saturation upstream of effluent discharge, %
DOsateff <- 0     # DO saturation of effluent discharge, %
Triv <- 15        # temperature of receiving water, C
Teff <- 21        # temperature of effluent, C
BOD5riv <- 1      # 5-day BOD of receiving water, mg/L
BOD5eff <- 180    # 5-day BOD of effluent, mg/L
K1 <- 0.4         # deoxygenation rate constant at 20C, 1/day
K2 <- 0.8         # reaeration rate constant at 20C, 1/day
Calculate some of the variables needed for the Streeter-Phelps model. Type ?hydromisc::DO_functions for more information on the DO-related functions in the hydromisc package. 
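The relationships the hydromisc functions encapsulate can also be sketched directly in base R. The function names below (`K_adjust`, `DO_deficit_sketch`) are hypothetical helpers written only to mirror Equations (11.3) and (11.1), not part of any package:

```r
# Sketch of Equation (11.3): temperature adjustment of a rate coefficient.
# K_adjust is a hypothetical helper, not part of hydromisc.
K_adjust <- function(K, Temp, theta) {
  K * theta^(Temp - 20)
}

# Sketch of Equation (11.1): the Streeter-Phelps DO deficit at travel time t (days),
# given temperature-adjusted coefficients K1 and K2, ultimate BOD L0, and initial deficit D0.
DO_deficit_sketch <- function(t, K1, K2, L0, D0) {
  (K1 * L0 / (K2 - K1)) * (exp(-K1 * t) - exp(-K2 * t)) + D0 * exp(-K2 * t)
}

# At t = 0 both exponentials equal 1, so the deficit reduces to D0, as expected
DO_deficit_sketch(0, K1 = 0.4, K2 = 0.8, L0 = 30, D0 = 2)  # 2
```

A quick sanity check: at 20\\(^\\circ\\)C the adjustment factor \\(\\theta^{T-20}\\) equals 1, so `K_adjust(0.4, 20, 1.135)` returns 0.4 unchanged.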
Tmix <- hydromisc::Mixture(Q, Triv, Qeff, Teff)
K1adj <- hydromisc::Kadj_deox(K1=K1, T=Tmix)
K2adj <- hydromisc::Kadj_reox(K2=K2, T=Tmix)
BOD5mix <- hydromisc::Mixture(Q, BOD5riv, Qeff, BOD5eff)
L0 <- BOD5mix/(1-exp(-K1adj*5)) #BOD5 - ultimate
Find the dissolved oxygen for 100 percent saturation (assuming no salinity) and the initial DO deficit at the point of discharge.
Cs <- hydromisc::O2sat(Tmix) #DO saturation, mg/l
C0 <- hydromisc::Mixture(Q, DOsatupstr/100.*Cs, Qeff, DOsateff/100.*Cs) #DO init, mg/l
D0 <- Cs - C0 #initial deficit
Determine a set of distances where the DO deficit will be calculated, and the corresponding times for the flow to travel that distance.
xs <- seq(from=0.5, to=800, by=5)
ts <- xs*1000/(V*86400)
Finally, calculate the DO (as a percent of saturation) and plot the results.
DO_def <- hydromisc::DOdeficit(t=ts, K1=K1adj, K2=K2adj, L0=L0, D0=D0)
DO_mgl <- Cs - DO_def
DO_pct <- 100*DO_mgl/Cs
plot(xs,DO_pct,xlim=c(0,800),ylim=c(0,100),type="l",xlab="Distance, km",ylab="DO, %")
grid()
Figure 11.4: Dissolved oxygen for this example. For this example, the saturation DO concentration is 9.9 mg/L, meaning the minimum value of the curve corresponds to about 4 mg/L. The EPA notes that values this low are below those recommended for the protection of aquatic life in freshwater. This shows that while the ecosystem has not collapsed (i.e., it is not following a Type 4 curve in Figure 11.2), effective ecosystem functions may be lost. 11.2 Detecting changes in hydrologic data Planning for decades or more requires the ability to determine whether changes are occurring or have already occurred. Two types of changes will be considered here: step changes, caused by an abrupt change such as deforestation or a new pollutant source, and monotonic (either always increasing or decreasing) trends, caused by more gradual shifts. These are illustrated in Figures 11.5 and 11.6. 
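The two types of change can be illustrated with synthetic series (hypothetical values generated for illustration only, not the datasets used in the examples that follow):

```r
# Hypothetical synthetic records illustrating the two types of change
set.seed(42)
years <- 1980:2023
n <- length(years)
# A step change: the mean shifts abruptly halfway through the record
step_series <- c(rnorm(n/2, mean = 20, sd = 4), rnorm(n/2, mean = 14, sd = 4))
# A monotonic trend: a gradual decline superimposed on the same variability
trend_series <- 20 - 0.15 * (years - years[1]) + rnorm(n, sd = 4)
plot(years, step_series, type = "b", xlab = "Year", ylab = "Value")
```

With series like these in hand, the step-change and trend tests demonstrated below can be experimented with before applying them to real records.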
Figure 11.5: A shift in phosphorus concentrations (source: USGS Scientific Investigations Report 2017-5006, App. 4, https://doi.org/10.3133/sir20175006). Figure 11.6: A trend in annual peak streamflow. (source: USGS Professional Paper 1869, https://doi.org/10.3133/pp1869). Before performing calculations related to trend significance, refer to Chapter 4 of Statistical Methods in Water Resources (Helsel, D.R. et al., 2020) to review the relationship between hypothesis testing and statistical significance. Figure 11.7 from that reference illustrates this. Figure 11.7: Four possible results of hypothesis testing. (source: Helsel et al., 2020). In the context of the example that follows, the null hypothesis, H0, is usually a statement that no trend exists. The \\(\\alpha\\)-value (the significance level) is the probability of incorrectly rejecting the null hypothesis, that is, rejecting H0 when it is in fact true. The acceptable significance level is a decision that must be made; a common value is \\(\\alpha\\)=0.05 (5 percent) significance, also referred to as \\(1-\\alpha=0.95\\) (95 percent) confidence. A statistical test will produce a p-value, which is essentially the likelihood that the null hypothesis is true, or more technically, the probability of obtaining the calculated test statistic (or one more extreme) when the null hypothesis is true. Again, in the context of trend detection, small p-values (less than \\(\\alpha\\)) indicate greater confidence for rejecting the null hypothesis and thus supporting the existence of a “statistically significant” trend. One of the most robust impacts of a warming climate is its effect on snow. In California, the peak of snow accumulation has historically occurred on roughly April 1, on average. To demonstrate methods for detecting changes, data from the Four Trees Cooperative Snow Sensor site in California are used, obtained from the USDA National Water and Climate Center. 
These data are available as part of the hydromisc package.
swe <- hydromisc::four_trees_swe
plot(swe$Year, swe$April_1_SWE_in, xlab="Year", ylab="April 1 Snow Water Equivalent, in")
lines(zoo::rollmean(swe, k=5), col="blue", lty=3, cex=1.4)
Figure 11.8: April 1 snow water equivalent at Four Trees station, CA. The dashed line is a 5-year moving average. A plot is always useful; here a 5-year moving average, or rolling mean, is added (using the zoo package) to make any trends more observable. 11.2.1 Detecting a step change When there is a step change in a record, you need to test whether the difference between the “before” and “after” conditions is large enough relative to natural variability that it can be confidently described as a change. In other words, whether the change is significant must be determined. This is done by breaking the data into two samples and applying a statistical test, such as a t-test or the nonparametric rank-sum (or Mann-Whitney U) test. While for this example there is no obvious reason to break the data at any particular year, we’ll just look at the first and second halves. Separate the two subsets of years into two arrays of (y) values (not data frames in this case) and then create a boxplot of the two periods.
yvalues1 <- swe$April_1_SWE_in[(swe$Year >= 1980) & (swe$Year <= 2001)]
yvalues2 <- swe$April_1_SWE_in[(swe$Year >= 2002) & (swe$Year <= 2023)]
boxplot(yvalues1,yvalues2,names=c("1980-2001","2002-2023"),boxwex=0.2,ylab="swe, in")
Figure 11.9: Comparison of two records of SWE at Four Trees station, CA. Calculate the means and medians of the two periods, just for illustration.
mean(yvalues1)
#> [1] 19.76364
mean(yvalues2)
#> [1] 15.44545
median(yvalues1)
#> [1] 17.8
median(yvalues2)
#> [1] 7.9
The mean for the later period is lower, as is the median. The question to pose is whether these differences are statistically significant. The following tests allow that determination. 11.2.1.1 Method 1: Using a t-test. 
A t-test determines the significance of a difference in the mean between two samples under a number of assumptions. These include independence of each data point (in this example, that any year’s April 1 SWE is uncorrelated with prior years) and that the data are normally distributed. This is performed with the t.test function. The alternative argument is set to “two.sided”; a one-sided test would test whether one group is only greater than or less than the other, but here we only want to test whether they are different. The paired argument is set to FALSE since there is no correspondence between the order of values in each subset of years.
t.test(yvalues1, yvalues2, var.equal = FALSE, alternative = "two.sided", paired = FALSE)
#> 
#>  Welch Two Sample t-test
#> 
#> data:  yvalues1 and yvalues2
#> t = 0.91863, df = 40.084, p-value = 0.3638
#> alternative hypothesis: true difference in means is not equal to 0
#> 95 percent confidence interval:
#>  -5.181634 13.817998
#> sample estimates:
#> mean of x mean of y
#>  19.76364  15.44545
Here the p-value is 0.36, a value much greater than \\(\\alpha = 0.05\\), so the null hypothesis cannot be rejected. The difference in the means is not significant based on this test. 11.2.1.2 Method 2: Wilcoxon rank-sum (or Mann-Whitney U) test. Like the t-test, the rank-sum test produces a p-value, but it tests a more general measure of central tendency (such as a median) rather than a mean. Assumptions about independence of data are still necessary, but there is no requirement that the data be normally distributed. It is less affected by outliers or a few extreme values than the t-test. This is performed with a standard R function. Other arguments are set as with the t-test. 
wilcox.test(yvalues1, yvalues2, alternative = "two.sided", paired=FALSE)
#> Warning in wilcox.test.default(yvalues1, yvalues2, alternative = "two.sided", :
#> cannot compute exact p-value with ties
#> 
#>  Wilcoxon rank sum test with continuity correction
#> 
#> data:  yvalues1 and yvalues2
#> W = 297, p-value = 0.1999
#> alternative hypothesis: true location shift is not equal to 0
The p-value is much lower than with the t-test, showing less influence of the two very high SWE values in the second half of the record. 11.2.2 Detecting a monotonic trend In a similar way to the step change, a monotonic trend can be tested using parametric or non-parametric methods. Here we use the entire record to detect trends over the entire period. Linear regression may be used as a parametric method, which makes assumptions similar to the t-test (that residuals of the data are normally distributed). If the data do not conform to a normal distribution, the Mann-Kendall test can be applied, which is a non-parametric test. 11.2.2.1 Method 1: Regression To perform a linear regression in R, build a linear regression model (lm). This can take the swe data frame as input data, specifying the columns to relate linearly.
m <- lm(April_1_SWE_in ~ Year, data = swe)
summary(m)$coefficients
#>                Estimate  Std. Error    t value  Pr(>|t|)
#> (Intercept) 347.3231078 370.6915171  0.9369600 0.3541359
#> Year         -0.1647357   0.1852031 -0.8894868 0.3788081
The row for “Year” provides the data on the slope. The slope shows SWE declining by 0.16 inches/year based on regression. The p-value for the slope is 0.379, much larger than the typical \\(\\alpha\\), meaning we cannot claim that a significant slope exists based on this test. So while a declining April 1 snowpack is observed at this location, it is not outside of the natural variability of the data based on a regression analysis. 11.2.2.2 Method 2: Mann-Kendall To conduct a Mann-Kendall trend test, additional packages need to be installed. 
There are a number available; what is shown below is one method. A non-parametric trend test (and plot) requires a few extra packages, which are installed like this:
if (!require('Kendall', quietly = TRUE)) install.packages('Kendall')
if (!require('zyp', quietly = TRUE)) install.packages('zyp')
Now the significance of the trend can be calculated. The slope associated with this test, the “Theil-Sen slope”, is calculated using the zyp package.
mk <- Kendall::MannKendall(swe$April_1_SWE_in)
summary(mk)
#> Score =  -99 , Var(Score) = 9729
#> denominator =  934.4292
#> tau = -0.106, 2-sided pvalue =0.32044
ss <- zyp::zyp.sen(April_1_SWE_in ~ Year, data=swe)
ss$coefficients
#>   Intercept        Year
#> 291.1637542  -0.1385452
The non-parametric slope shows April 1 SWE declining by 0.14 inches per year over the period. Again, however, the p-value is greater than the typical \\(\\alpha\\), so based on this method the trend is not significantly different from zero. As with the tests for a step change, the p-value is lower for the nonparametric test. A summary plot of the slopes of both methods is helpful.
plot(swe$Year,swe$April_1_SWE_in, xlab = "Year",ylab = "Snow water equivalent, in")
lines(swe$Year,m$fitted.values, lty=1, col="black")
abline(a = ss$coefficients["Intercept"], b = ss$coefficients["Year"], col="red", lty=2)
legend("topright", legend=c("Observations","Regression","Theil-Sen"), col=c("black","black","red"),lty = c(NA,1,2), pch = c(1,NA,NA), cex=0.8)
Figure 11.10: Trends of SWE at Four Trees station, CA. 11.2.3 Choosing whether to use parametric or non-parametric tests Using the parametric tests above (t-test, regression) requires making an assumption about the underlying distribution of the data, which non-parametric tests do not require. When using a parametric test, the assumption of normality can be tested. 
For example, the regression residuals can be tested with the following, where the null hypothesis is that the data are normally distributed.
shapiro.test(m$residuals)$p.value
#> [1] 0.003647395
This produces a very small p-value (p < 0.01), meaning the null hypothesis that the residuals are normally distributed is rejected with >99% confidence. This means a non-parametric test is more appropriate. In general, non-parametric tests are preferred in hydrologic work because data (and residuals) are rarely normally distributed. 11.3 Detecting changes in extreme events When looking at extreme events like the 100-year high tide, the methods are similar to those used in flood frequency analysis. One distinction is that flood frequency often uses a Gumbel or Log-Pearson type 3 distribution. For sea-level rise (and many other extreme events) other distributions are employed, a common one being the Generalized Extreme Value (GEV), the cumulative distribution of which is described by Equation (11.4). \\[\\begin{equation} F\\left(x;\\mu,\\sigma,\\xi\\right)=exp\\left[-\\left(1+\\xi\\left(\\frac{x-\\mu}{\\sigma}\\right)\\right)^{-1/\\xi}\\right] \\tag{11.4} \\end{equation}\\] The three parameters \\(\\xi\\), \\(\\mu\\), and \\(\\sigma\\) represent the shape, location, and scale of the distribution function. These distribution parameters can be determined using observations of extremes over a long period or over different periods of record, much as the mean, standard deviation, and skew are used in flood frequency calculations. The distribution can then be used to estimate the probability associated with a specific magnitude event, or conversely the event magnitude associated with a defined risk level. An excellent example of that is from Tebaldi et al. (2012), who analyzed projected extreme sea level changes through the 21st century. 
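Equation (11.4) can be sketched directly in base R. The function name and the parameter values below are illustrative only (not fitted to any data; the fitted analysis follows later in the chapter):

```r
# Sketch of the GEV cumulative distribution function, Equation (11.4).
# gev_cdf is a hypothetical helper; loc, scale, and shape values are illustrative.
gev_cdf <- function(x, loc, scale, shape) {
  exp(-(1 + shape * (x - loc) / scale)^(-1 / shape))
}

# Probability of non-exceedance for an arbitrary level, and the
# corresponding return period T = 1/(1 - F)
F_x <- gev_cdf(2.35, loc = 2.0, scale = 0.1, shape = -0.2)
1 / (1 - F_x)
```

This is the same relationship evaluated by extRemes::pevd below, which additionally handles fitting the parameters to data.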
Figure 11.11: Projected return periods by 2050 for floods that are 100 yr events during 1959–2008, Tebaldi et al., 2012 An example using the GEV with sea level data is illustrated below. The Tebaldi et al. (2012) paper uses the R package extRemes, which we will use here. The same package has been used to study extreme wind, precipitation, temperature, streamflow, and other events, so it is a very versatile and widely-used package. Install the package if it is not already installed.
if (!require('extRemes', quietly = TRUE)) install.packages('extRemes')
11.3.1 Obtaining and preparing sea-level data Sea-level data can be downloaded directly into R using the rnoaa package. However, NOAA also has a very intuitive interface that allows geographical searching and preliminary viewing of data. From the NOAA Tides & Currents site one can search an area of interest and find a tide gauge with a long record, as in Figure 11.12. Figure 11.12: Identification of a sea-level gauge on the NOAA Tides & Currents site. Exploring the data inventory on the station’s home page shows the gauge has a very long record, having been established in 1854, with measurement of extremes for over a century. Avoid selecting a partial month, or you may not have the ability to download monthly data. Monthly data were downloaded and saved as a csv file, which is available with the hydromisc package.
datafile <- system.file("extdata", "sealevel_9414290_wl.csv", package="hydromisc")
dat <- read.csv(datafile,header=TRUE)
These data were saved in metric units, so all levels are in meters above the selected tidal datum. There are dates indicating the month associated with each value (with day 1 included as a placeholder). If there are any missing data they may be labeled as “NaN”. 
If you see that, a clean way to address it is to first change the missing data to NA (which R recognizes) with a command such as
dat[dat == "NaN"] <- NA
For this example we are looking at extreme tide levels, so only retain the “Highest” and “Date” columns.
peak_sl <- subset(dat, select=c("Date", "Highest"))
A final data preparation step is to create an annual time series with the maximum tide level in each year. One way to facilitate this is to add a column of “year.” Then the data can be aggregated by year, creating a new data frame, taking the maximum value for each year (many other functions, like mean, median, etc. can also be used). In this example the column names are changed to make it easier to work with the data. Also, the year column is converted to an integer for plotting purposes. Any rows with NA values are removed.
peak_sl$year <- as.integer(strftime(peak_sl$Date, "%Y"))
peak_sl_ann <- aggregate(peak_sl$Highest,by=list(peak_sl$year),FUN=max, na.rm=TRUE)
colnames(peak_sl_ann) <- c("year","peak_m")
peak_sl_ann <- na.exclude(peak_sl_ann)
A plot is always helpful.
plot(peak_sl_ann$year,peak_sl_ann$peak_m,xlab="Year",ylab="Annual Peak Sea Level, m")
Figure 11.13: Annual highest sea-levels relative to MLLW at gauge 9414290. 11.3.2 Conducting the extreme event analysis The question we will attempt to address is whether the 100-year peak tide level (the level exceeded with a 1 percent probability in any year) has increased between the 1900-1930 and 1990-2020 periods. Extract a subset of the data for one period and fit a GEV distribution to the values.
peak_sl_sub1 <- subset(peak_sl_ann, year >= 1900 & year <= 1930)
gevfit1 <- extRemes::fevd(peak_sl_sub1$peak_m)
gevfit1$results$par
#>   location      scale      shape
#>  2.0747606  0.1004844 -0.2480902
A plot of return periods for the fitted distribution is available as well.
extRemes::plot.fevd(gevfit1, type="rl")
Figure 11.14: Return periods based on the fitted GEV distribution for 1900-1930. 
Points are observations; dashed lines enclose the 95% confidence interval. As is usually the case, a statistical model does well in the area with observations, but the uncertainty increases for extreme values (like estimating a 500-year event from a 30-year record). A longer record produces better (less uncertain) estimates at higher return periods. Based on the GEV fit, the 100-year recurrence interval extreme tide is determined using:
extRemes::return.level(gevfit1, return.period = 100, do.ci = TRUE, verbose = TRUE)
#> Preparing to calculate 95 % CI for 100-year return level
#> Model is fixed
#> Using Normal Approximation Method.
#> extRemes::fevd(x = peak_sl_sub1$peak_m)
#> [1] "Normal Approx."
#> [1] "100-year return level: 2.35"
#> [1] "95% Confidence Interval: (2.2579, 2.4429)"
A check can be done using the reverse calculation, estimating the return period associated with a specified value of highest water level. This can be done by extracting the three GEV parameters, then running the pevd command.
loc <- gevfit1$results$par[["location"]]
sca <- gevfit1$results$par[["scale"]]
shp <- gevfit1$results$par[["shape"]]
extRemes::pevd(2.35, loc = loc, scale = sca , shape = shp, type = c("GEV"))
#> [1] 0.9898699
This returns a value of 0.99, which is the CDF value, or the probability of non-exceedance, F. Recalling that the return period is T = 1/P = 1/(1-F), where P is the probability of exceedance and F the probability of non-exceedance, the result that 2.35 meters is the 100-year highest water level is validated. Repeating the calculation for a more recent period:
peak_sl_sub2 <- subset(peak_sl_ann, year >= 1990 & year <= 2020)
gevfit2 <- extRemes::fevd(peak_sl_sub2$peak_m)
extRemes::return.level(gevfit2, return.period = 100, do.ci = TRUE, verbose = TRUE)
#> Preparing to calculate 95 % CI for 100-year return level
#> Model is fixed
#> Using Normal Approximation Method.
#> extRemes::fevd(x = peak_sl_sub2$peak_m)
#> [1] "Normal Approx." 
#> [1] "100-year return level: 2.597"
#> [1] "95% Confidence Interval: (2.3983, 2.7957)"
This returns a 100-year high tide of 2.6 m for 1990-2020, a 10.6% increase over 1900-1930. Another way to look at this is to find out how the frequency of the past (in this case, 1900-1930) 100-year event has changed with rising sea levels. Repeating the calculations from before to capture the GEV parameters for the later period, and then plugging in the 100-year high tide from the early period:
loc2 <- gevfit2$results$par[["location"]]
sca2 <- gevfit2$results$par[["scale"]]
shp2 <- gevfit2$results$par[["shape"]]
extRemes::pevd(2.35, loc = loc2, scale = sca2 , shape = shp2, type = c("GEV"))
#> [1] 0.7220968
This returns a value of 0.72 (72% non-exceedance, or 28% exceedance; in other words, we expect to see an annual high tide of 2.35 m or higher in 28% of years). The return period is calculated as above: T = 1/(1-0.72) = 3.6 years. So, what was the 100-year event in 1900-1930 is about a 4-year event now. "],["management-of-water-resources-systems.html", "Chapter 12 Management of water resources systems 12.1 A simple linear system with two decision variables 12.2 More complex linear programming: reservoir operation 12.3 More Realistic Reservoir Operation: non-linear programming", " Chapter 12 Management of water resources systems Figure 12.1: Lookout Point Dam on the Middle Fork Willamette River (source: U.S. Army Corps of Engineers). Water resources systems tend to provide a variety of benefits, such as flood control, hydroelectric power, recreation, navigation, and irrigation. Each of these provides a benefit that can be quantified, and there are also associated costs that can be quantified. A challenge engineers face is how to manage a system to balance the different uses. Mathematical optimization, which can take many forms, is employed to do this. Introductions to linear programming and other forms of optimization are plentiful. 
For a background on the concepts and theories, refer to other references. An excellent, comprehensive reference is Water Resource Systems Planning and Management (Loucks & Van Beek, 2017), freely available online. What follows is a demonstration of using some of these optimization methods, but no recap of the theory is provided. The examples here use linear systems, where the objective function and constraints are all linear functions of the decision variables. The hydromisc package will need to be installed to access some of the data used below. If it is not installed, do so following the instructions on the github site for the package. 12.1 A simple linear system with two decision variables 12.1.1 Overview of problem formulation One of the simplest systems to optimize is a linear system of two variables, which means a graphical solution in 2-d is possible. This first demonstration is a reworking of a textbook example (Wurbs & James, 2002). To set up a solution, three things must be described: the decision variables – the variables for which optimal values are sought the constraints – physical or other limitations on the decision variables (or combinations of them) the objective function – an expression, using the decision variables, of what is to be minimized or maximized. 12.1.2 Setting up an example Example 12.1 To supply water to a community, there are two sources of water available with different levels of total dissolved solids (TDS): groundwater (TDS=800 mg/l) and surface water from a reservoir (TDS=100 mg/l). The first two constraints are that a total water demand of 7,500 m\\(^3\\) must be met, and the delivered water (mixed groundwater and reservoir supplies) can have a maximum TDS of 400 mg/l. This is illustrated in Figure 12.2. Figure 12.2: A schematic of the system for this example. Two additional constraints are that groundwater withdrawal cannot exceed 4,000 m\\(^3\\) and reservoir withdrawals cannot exceed 7,500 m\\(^3\\). 
There are two decision variables: X1=groundwater and X2=reservoir supply. The objective is to minimize the reservoir withdrawal while meeting the constraints. The TDS constraint is reorganized as: \\[\\frac{800~X1+100~X2}{X1+X2}\\le 400~~~or~~~ 400~X1-300~X2\\le 0\\] Rewriting the other three constraints as functions of the decision variables: \\[\\begin{align*} X1+X2 \\ge 7500 \\\\ X1 \\le 4000 \\\\ X2 \\le 7500 \\end{align*}\\] Notice that the constraints are all expressed as linear functions of the decision variables (left side of the equations) and a value on the right. 12.1.3 Graphing the solution space While this can only be done easily for systems with two decision variables, a plot of the solution space can be made here by graphing all of the constraints and shading the region where all constraints are satisfied. Figure 12.3: The solution space, shown as the cross-hatched area. In the feasible region, it is clear that the minimum reservoir supply, X2, would be a little larger than 4,000 m\\(^3\\). 12.1.4 Setting up the problem in R An R package useful for solving linear programming problems is the lpSolveAPI package. Install that if necessary, and also install the knitr and kableExtra packages, since they are very useful for printing the many tables that linear programming involves. Begin by creating an empty linear model. The (0,2) means zero constraints (they’ll be added later) and 2 decision variables. The next two lines just assign names to the decision variables. Because we will use many functions of the lpSolveAPI package, load the library first. Load the kableExtra package too.
library(lpSolveAPI)
library(kableExtra)
example.lp <- lpSolveAPI::make.lp(0,2) # 0 constraints and 2 decision variables
ColNames <- c("X1","X2")
colnames(example.lp) <- ColNames # Set the names of the decision variables
Now set up the objective function. Minimization is the default goal of this R function, but we’ll set it anyway to be clear. 
The second argument is the vector of coefficients for the decision variables, meaning X2 is minimized.
set.objfn(example.lp,c(0,1))
x <- lp.control(example.lp, sense="min") #save output to a dummy variable
The next step is to define the constraints. Four constraints were listed above. Additional constraints could be added that \\(X1\\ge 0\\) and \\(X2\\ge 0\\); however, variable ranges in this LP solver are [0,infinity] by default, so for this example we do not need to include constraints for positive results. If necessary, decision variable valid ranges can be set using set.bounds(). Constraints are defined with the add.constraint command. Figure 12.4 provides an annotated example of the use of an add.constraint command. Figure 12.4: Annotated example of an add.constraint command. Type ?add.constraint in the console for additional details. The four constraints for this example are added with:
add.constraint(example.lp, xt=c(400,-300), type="<=", rhs=0, indices=c(1,2))
add.constraint(example.lp, xt=c(1,1), type=">=", rhs=7500)
add.constraint(example.lp, xt=c(1,0), type="<=", rhs=4000)
add.constraint(example.lp, xt=c(0,1), type="<=", rhs=7500)
That completes the setup of the linear model. You can view the model to verify the values you entered by typing the name of the model.
example.lp
#> Model name: 
#>             X1    X2          
#> Minimize     0     1          
#> R1         400  -300  <=     0
#> R2           1     1  >=  7500
#> R3           1     0  <=  4000
#> R4           0     1  <=  7500
#> Kind       Std   Std          
#> Type      Real  Real          
#> Upper      Inf   Inf          
#> Lower        0     0
If the model has a large number of decision variables it only prints a summary, but in that case you can use write.lp(example.lp, "example_lp.txt", "lp") to create a viewable file with the model. Now the model can be solved.
solve(example.lp)
#> [1] 0
If the solver finds an optimal solution it will return a zero. 
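The graphical reasoning can also be cross-checked with base R. At the optimum, the TDS constraint and the demand constraint are both binding (both hold with equality), so the two decision variables solve a 2x2 linear system. This is an illustrative check, not part of the lpSolveAPI workflow:

```r
# Illustrative check: at the optimum the TDS and demand constraints bind,
# so X1 and X2 solve a 2x2 linear system
A <- rbind(c(400, -300),   # 400*X1 - 300*X2 = 0   (TDS constraint binding)
           c(  1,    1))   # X1 + X2 = 7500        (demand constraint binding)
b <- c(0, 7500)
solve(A, b)   # X1 and X2; X2 comes out to about 4285.7
```

Algebraically, X2 = 7500 x 400/700, which agrees with the position of the optimum visible in the graphed solution space.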
12.1.5 Interpreting the optimal results View the final value of the objective function by retrieving it and printing it:
optimal_solution <- get.objective(example.lp)
print(paste0("Optimal Solution = ",round(optimal_solution,2),sep=""))
#> [1] "Optimal Solution = 4285.71"
For more detail, recover the values of each of the decision variables.
vars <- get.variables(example.lp)
Next you can print the sensitivity report – a vector of M constraints followed by N decision variables. It helps to create a data frame for viewing and printing the results. Nicer printing is achieved using the kable and kableExtra functions.
sens <- get.sensitivity.obj(example.lp)$objfrom
results1 <- data.frame(variable=ColNames,value=vars,gradient=as.integer(sens))
kbl(results1, booktabs = TRUE) %>% kable_styling(full_width = F)
variable     value  gradient
X1        3214.286        -1
X2        4285.714         0
The above shows decision variable values for the optimal solution. The gradient is the change in the objective function for a unit increase in the decision variable. Here a negative gradient for decision variable \\(X1\\), the groundwater withdrawal, means that increasing the groundwater withdrawal will have a negative effect on the objective function (to minimize \\(X2\\)): that is intuitive, since increasing groundwater withdrawal can reduce reservoir supply on a one-to-one basis. To look at which constraints are binding, retrieve the $duals part of the output.
m <- length(get.constraints(example.lp)) #number of constraints
duals <- get.sensitivity.rhs(example.lp)$duals[1:m]
results2 <- data.frame(constraint=c(seq(1:m)),multiplier=duals)
kbl(results2, booktabs = TRUE) %>% kable_styling(full_width = F)
constraint  multiplier
1           -0.0014286
2            0.5714286
3            0.0000000
4            0.0000000
The multipliers for each constraint are referred to as Lagrange multipliers (or shadow prices). 
A non-zero multiplier indicates a binding constraint, and its value is the change in the objective function that would result from a unit change in the constraint's right-hand side. Zero values are non-binding, since a unit change in their value has no effect on the optimal result. For example, constraint 3, that \\(X1 \\le 4000\\), with a multiplier of zero, could be changed (at least a small amount – there can be a limit after which it can become binding) with no effect on the optimal solution. Similarly, if constraint 2, \\(X1+X2 \\ge 7500\\), were increased, the objective function (the optimal reservoir supply) would also increase. 12.2 More complex linear programming: reservoir operation Water resources systems are far too complicated to be summarized by two decision variables and only a few constraints, as above. Example 12.2 demonstrates how the same procedure can be applied to a slightly more complex system. This is a reformulation of an example from the same text as referenced above (Wurbs & James, 2002). Example 12.2 A river flows into a storage reservoir where the operator must decide how much water to release each month. For simplicity, inflows will be described by a fixed sequence of 12 monthly flows. There are two downstream needs to satisfy: hydropower generation and irrigation diversions. Benefits are derived from these two uses: revenues are $ 800 per 10\\(^6\\)m\\(^3\\) of water diverted for irrigation, and $ 350 per 10\\(^6\\)m\\(^3\\) for hydropower generation. The objective is to determine the releases that will maximize the total revenue. There are physical characteristics of the system that provide some constraints, and others are derived from basic physics, such as the conservation of mass. A schematic of the system is shown in Figure 12.5. Figure 12.5: A schematic of the water resources system for this example. Diversions through the penstock to the hydropower facility are limited to its capacity of 160 10\\(^6\\)m\\(^3\\)/month. 
For reservoir releases less than that, all of the released water can generate hydropower; flows above that capacity will spill without generating hydropower benefits. The reservoir has a capacity of 550 10\\(^6\\)m\\(^3\\), so anything above that will have to be released. Assume the reservoir is at half capacity initially. The irrigation demand varies by month, and diversions up to the demand will produce benefits. These are: Month Demand, 10\\(^6\\)m\\(^3\\) Month Demand, 10\\(^6\\)m\\(^3\\) Month Demand, 10\\(^6\\)m\\(^3\\) Jan (1) 0 May (5) 40 Sep (9) 180 Feb (2) 0 Jun (6) 130 Oct (10) 110 Mar (3) 0 Jul (7) 230 Nov (11) 0 Apr (4) 0 Aug (8) 250 Dec (12) 0 12.2.1 Problem summary There are 48 decision variables in this problem: 12 monthly values each for reservoir storage (s\\(_1\\)-s\\(_{12}\\)), release (r\\(_1\\)-r\\(_{12}\\)), hydropower generation (h\\(_1\\)-h\\(_{12}\\)), and agricultural diversion (d\\(_1\\)-d\\(_{12}\\)). The objective function is to maximize the revenue, which is expressed by Equation (12.1). \\[\\begin{equation} Maximize~ x_0=\\sum_{i=1}^{12}\\left(350h_i+800d_i\\right) \\tag{12.1} \\end{equation}\\] Constraints will need to be described to apply the limits to hydropower diversion and storage capacity, and to limit agricultural diversions to no more than the demand. 12.2.2 Setting up the problem in R Create variables for the known or assumed initial values for the system. penstock_cap <- 160 #penstock capacity in million m3/month res_cap <- 550 #reservoir capacity in million m3 res_init_vol <- res_cap/2 #set initial reservoir volume equal to half of capacity irrig_dem <- c(0,0,0,0,40,130,230,250,180,110,0,0) revenue_water <- 800 #revenue for delivered irrigation water, $/million m3 revenue_power <- 350 #revenue for power generated, $/million m3 A time series of 20 years (January 2000 through December 2019) of monthly flows for this exercise is included with the hydromisc package. 
Load that and extract the first 12 months to use in this example. inflows_20years <- hydromisc::inflows_20years inflows <- as.numeric(window(inflows_20years, start = c(2000, 1), end = c(2000, 12))) It helps to illustrate how the irrigation demands and inflows vary, and therefore why storage might be useful in regulating flow to provide more reliable irrigation deliveries. par(mgp=c(2,1,0)) ylbl <- expression(10 ^6 ~ m ^3/month) plot(inflows, type="l", col="blue", xlab="Month", ylab=ylbl) lines(irrig_dem, col="darkgreen", lty=2) legend("topright",c("Inflows","Irrigation Demand"),lty = c(1,2), col=c("blue","darkgreen")) grid() Figure 12.6: Inflows and irrigation demand. 12.2.3 Building the linear model Following the same steps as for a simple 2-variable problem, begin by setting up a linear model. Because there are so many decision variables, it helps to add names to them. reser.lp <- make.lp(0,48) DecisionVarNames <- c(paste0("s",1:12),paste0("r",1:12),paste0("h",1:12),paste0("d",1:12)) colnames(reser.lp) <- DecisionVarNames From this point on, the decision variables will be addressed by their indices, that is, their numeric position in this sequence of 48 values. To summarize their positions: Decision Variables Indices (columns) Storage (s1-s12) 1-12 Release (r1-r12) 13-24 Hydropower (h1-h12) 25-36 Irrigation diversion (d1-d12) 37-48 Using these indices as a guide, set up the objective function and initialize the linear model. While not necessary, redirecting the output of the lp.control to a variable prevents a lot of output to the console. The following takes the revenue from hydropower and irrigation (in $ per 10\\(^6\\)m\\(^3\\)/month), multiplies them by the 12 monthly values for the hydropower flows and the irrigation deliveries, and sets the objective to maximize their sum, as described by Equation (12.1). 
set.objfn(reser.lp,c(rep(revenue_power,12),rep(revenue_water,12)),indices = c(25:48)) x <- lp.control(reser.lp, sense="max") With the LP set up, the constraints need to be applied. Negative releases, storage, or river flows don’t make sense, so they all need to be positive: \\(s_t\\ge0\\), \\(r_t\\ge0\\), \\(h_t\\ge0\\) for all 12 months. But because the lpSolveAPI package assumes all decision variables have a range of \\(0\\le x\\le \\infty\\) these do not need to be explicitly added as constraints. When using other software packages these may need to be included. 12.2.3.1 Constraints 1-12: Maximum storage The maximum capacity of the reservoir cannot be exceeded in any month, or \\(s_t\\le 550\\) for all 12 months. This can be added in a simple loop: for (i in seq(1,12)) { add.constraint(reser.lp, xt=c(1), type="<=", rhs=res_cap, indices=c(i)) } 12.2.3.2 Constraints 13-24: Irrigation diversions The irrigation diversions should never exceed the demand. Although the demand is zero in some months, decision variables are all assumed non-negative, so the \\(\\le\\) constraint can be applied to every month. for (i in seq(1,12)) { add.constraint(reser.lp, xt=c(1), type="<=", rhs=irrig_dem[i], indices=c(i+36)) } 12.2.3.3 Constraints 25-36: Hydropower Hydropower release cannot exceed the penstock capacity in any month: \\(h_t\\le 160\\) for all 12 months. This can be done following the example above for the maximum storage constraint. for (i in seq(1,12)) { add.constraint(reser.lp, xt=c(1), type="<=", rhs=penstock_cap, indices=c(i+24)) } 12.2.3.4 Constraints 37-48: Reservoir release Reservoir release must equal or exceed irrigation deliveries, which is another way of saying that the water remaining in the river after the diversion cannot be negative. In other words \\(r_1-d_1\\ge 0\\), \\(r_2-d_2\\ge 0\\), … for all 12 months. 
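The constraint families above, together with the release-versus-hydropower limit, can be collapsed into one base-R feasibility check. This is a sketch with a hypothetical function name (not from the original text), useful for sanity-testing any candidate monthly plan outside the solver:

```r
# Sketch: test 12-month vectors of storage (s), release (r), hydropower (h),
# and irrigation diversion (d) against the LP constraints (hypothetical helper).
is_feasible <- function(s, r, h, d, irrig_dem,
                        res_cap = 550, penstock_cap = 160) {
  all(s >= 0, r >= 0, h >= 0, d >= 0,  # non-negativity (the solver's default)
      s <= res_cap,                    # storage never exceeds capacity
      d <= irrig_dem,                  # diversions never exceed demand
      h <= penstock_cap,               # hydropower limited by the penstock
      r >= d, r >= h)                  # release covers diversion and hydropower
}
```

For example, a plan with hydropower flows of 200 in every month fails the check (penstock capacity is 160), while one with all flows of 50 passes.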
For constraints involving more than one decision variable the constraint equations look a little different, and keeping track of the indices associated with each decision variable is essential. for (i in seq(1,12)) { add.constraint(reser.lp, xt=c(1,-1), type=">=", rhs=0, indices=c(i+12,i+36)) } 12.2.3.5 Constraints 49-60: Hydropower Hydropower generation will be less than or equal to reservoir release in every month, or \\(r_1-h_1\\ge 0\\), \\(r_2-h_2\\ge 0\\), … for all 12 months. for (i in seq(1,12)) { add.constraint(reser.lp, xt=c(1,-1), type=">=", rhs=0, indices=c(i+12,i+24)) } 12.2.3.6 Constraints 61-72: Conservation of mass Finally, considering the reservoir, the inflow minus the outflow in any month must equal the change in storage over that month. That can be expressed in an equation with decision variables on the left side as: \\[s_t-s_{t-1}+r_t=inflow_t\\] where \\(t\\) is a month from 1-12 and \\(s_t\\) is the storage at the end of month \\(t\\). We need to use the initial reservoir volume, \\(s_0\\) (given above in the problem statement) for the first month’s mass balance, so the above would become \\(s_1-s_0+r_1=inflow_1\\), or \\(s_1+r_1=inflow_1+s_0\\). All subsequent months can be assigned in a loop, add.constraint(reser.lp, xt=c(1,1), type="=", rhs=inflows[1]+res_init_vol, indices=c(1,13)) for (i in seq(2,12)) { add.constraint(reser.lp, xt=c(1,-1,1), type="=", rhs=inflows[i], indices=c(i,i-1,i+12)) } This completes the LP model setup. Especially for larger models, it is helpful to save the model. You can use something like write.lp(reser.lp, \"reservoir_LP.txt\", \"lp\") to create a file (readable using any text file viewer, like Notepad++) with all of the model details. It can also be read into R with the read.lp command to load the complete LP. The beginning of the file for this LP looks like: Figure 12.7: The top of the linear model file produced by write.lp(). 
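The mass-balance constraints can also be read as a forward simulation: starting from the initial volume, each month's end-of-month storage is the previous storage plus inflow minus release. A base-R sketch (hypothetical function, not from the original text):

```r
# Sketch: end-of-month storages implied by s_t = s_{t-1} + inflow_t - r_t.
simulate_storage <- function(inflows, releases, s0) {
  s0 + cumsum(inflows - releases)
}
simulate_storage(inflows = c(30, 50), releases = c(10, 20), s0 = 275)
# 295 325: 275 + 30 - 10 = 295, then 295 + 50 - 20 = 325
```

Written this way, it is clear why storage only needs equality constraints in the LP: once releases are chosen, the storages are fully determined by the inflows and the initial volume.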
12.2.3.7 Solving the model and interpreting output Solve the LP and retrieve the value of the objective function. solve(reser.lp) #> [1] 0 get.objective(reser.lp) #> [1] 1230930 To look at the hydropower generation, and to see how often spill occurs, it helps to view the associated decision variables (as noted above, these are indices 13-24 and 25-36). vars <- get.variables(reser.lp) # retrieve decision variable values results0 <- data.frame(variable=DecisionVarNames,value=vars) r0 <- cbind(results0[13:24, ], results0[25:36, ]) rownames(r0) <- c() names(r0) <- c("Decision Variable","Value","Decision Variable","Value") kbl(r0, booktabs = TRUE) %>% kable_styling(bootstrap_options = c("striped","condensed"),full_width = F) Decision Variable Value Decision Variable Value r1 160.00000 h1 160.00000 r2 160.00000 h2 160.00000 r3 160.00000 h3 160.00000 r4 89.44193 h4 89.44193 r5 40.00000 h5 40.00000 r6 130.00000 h6 130.00000 r7 230.00000 h7 160.00000 r8 197.58616 h8 160.00000 r9 160.00000 h9 160.00000 r10 112.03054 h10 112.03054 r11 96.96217 h11 96.96217 r12 105.45502 h12 105.45502 Figure 12.8: Reservoir releases and hydropower water use for optimal solution. For this optimal solution, the releases exceed the capacity of the penstock supplying the hydropower plant in July and August, meaning there would be reservoir spill during those months. Another important part of the output is the degree to which irrigation demand is met. The irrigation delivery is associated with decision variables with indices 37-48. Decision Variable Value Irrigation Demand, 10\\(^6\\)m\\(^3\\) d1 0.0000 0 d2 0.0000 0 d3 0.0000 0 d4 0.0000 0 d5 40.0000 40 d6 130.0000 130 d7 230.0000 230 d8 197.5862 250 d9 160.0000 180 d10 110.0000 110 d11 0.0000 0 d12 0.0000 0 August and September see a shortfall in irrigation deliveries where full demand is not met. Finally, finding which constraints are binding can provide insights into how a system might be modified to improve the optimal solution. 
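Spill and shortfall can be computed directly from the optimal values tabulated above (base R; the release and delivery vectors are copied from the tables):

```r
# Spill = release above penstock capacity; shortfall = unmet irrigation demand.
penstock_cap <- 160
releases   <- c(160, 160, 160, 89.44193, 40, 130, 230, 197.58616,
                160, 112.03054, 96.96217, 105.45502)
deliveries <- c(0, 0, 0, 0, 40, 130, 230, 197.5862, 160, 110, 0, 0)
irrig_dem  <- c(0, 0, 0, 0, 40, 130, 230, 250, 180, 110, 0, 0)
spill     <- pmax(releases - penstock_cap, 0)
shortfall <- pmax(irrig_dem - deliveries, 0)
which(spill > 0)       # months 7 and 8 (July, August)
which(shortfall > 0)   # months 8 and 9 (August, September)
```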
This is done similarly to the simpler problem above, by retrieving the duals portion of the sensitivity results. To address the question of whether the size of the reservoir is a binding constraint, that is, whether increasing reservoir size would improve the optimal results, only the first 12 constraints are printed. m <- length(get.constraints(reser.lp)) # retrieve the number of constraints duals <- get.sensitivity.rhs(reser.lp)$duals[1:m] results2 <- data.frame(Constraint=c(seq(1:m)),Multiplier=duals) kbl(results2[1:12,], booktabs = TRUE) %>% kable_styling(bootstrap_options = c("striped","condensed"),full_width = F) Constraint Multiplier 1 0 2 0 3 0 4 0 5 450 6 0 7 0 8 0 9 0 10 0 11 0 12 0 For this example, in only one month would a larger reservoir have a positive impact on the maximum revenue. 12.3 More Realistic Reservoir Operation: non-linear programming While the simple examples above illustrate how an optimal solution can be determined for a linear (and deterministic) reservoir system, in reality reservoirs are much more complex. Most reservoir operation studies use sophisticated software to develop and apply Rule Curves for reservoirs, aiming to optimally store and release water, preserving the storage pools as needed. Figure 12.9 shows how reservoir volumes are managed. Figure 12.9: Sample reservoir operating goals U.S. Army Corps of Engineers Many rule curves depend on the condition of the system at some prior time. Figure 12.10 shows a rule curve used to operate Folsom Reservoir on the American River in California, where the target storage depends on the total upstream storage available. Figure 12.10: Multiple rule curves based on upstream storage U.S. Army Corps of Engineers Report RD-48 One method for deriving an optimal solution for the nonlinear and random processes in a water resources system is stochastic dynamic programming (SDP). Like LP, SDP uses algorithms that optimize an objective function under specified constraints. 
However, SDP can accommodate non-linear, dynamic outcomes, such as those associated with flood risks or other stochastic events. SDP can combine the stochastic information with reservoir management actions, where the outcome of decisions can be dependent on the state of the system (as in Figure 12.10). Constraints can be set to be met a certain percentage of the time, rather than always. 12.3.1 Reservoir operation While SDP is a topic that is far more advanced than what will be covered here, one R package will be introduced. For reservoir optimization, the R package reservoir can use SDP to derive an optimal operating rule for a reservoir given a sequence of inflows using single or multiple constraints. The package can also take any derived rule curve and operate a reservoir using it, which is what will be demonstrated here. First, place the optimal releases, according to the LP above, into a new vector to be used as a set of target releases for the reservoir operation. target_release <- results0[13:24, ]$value The reservoir can be operated (for the same 12-month period, with the same 12 inflows as above) with a single command. x <- reservoir::simRes(inflows, target_release, res_cap, plot = F) The total revenue from hydropower generation and irrigation deliveries is computed as follows. irrig_releases <- pmin(x$releases,irrig_dem) irrig_benefits <- sum(irrig_releases*revenue_water) hydro_releases <- pmin(x$releases,penstock_cap) hydro_benefits <- hydro_releases*revenue_power sum(irrig_benefits,hydro_benefits) #> [1] 1230930 Unsurprisingly, this produces the same result as with the LP example. 12.3.2 Performing stochastic dynamic programming The optimal releases, or target releases, were established based on a single year. The SDP in the reservoir package can be used to determine optimal releases based on a time series of inflows. Here the entire 20-year inflow sequence is used to generate a multiobjective optimal solution for the system. 
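The weights used below are simply the two revenue rates normalized to sum to one. A quick base-R check of that normalization (values taken from the problem statement above):

```r
# Weights proportional to the revenue rates, normalized to sum to 1.
revenue_water <- 800   # $/million m3, irrigation
revenue_power <- 350   # $/million m3, hydropower
w <- c(water = revenue_water, power = revenue_power) /
     (revenue_water + revenue_power)
round(w, 4)   # water 0.6957, power 0.3043
sum(w)        # 1
```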
A weighting must be applied to describe the importance of meeting different parts of the objective function. The target release(s) cannot be zero, so a small constant is added. weight_water <- revenue_water/(revenue_water + revenue_power) weight_power <- revenue_power/(revenue_water + revenue_power) z <- reservoir::sdp_multi(inflows_20years, cap=res_cap, target = irrig_dem+0.01, R_max = penstock_cap, spill_targ = 0.95, weights = c(weight_water, weight_power, 0.00), loss_exp = c(1, 1, 1), tol=0.99, S_initial=0.5, plot=FALSE) irrig_releases2 <- pmin(z$releases,irrig_dem) irrig_benefits2 <- sum(irrig_releases2*revenue_water) hydro_releases2 <- pmin(z$releases,penstock_cap) hydro_benefits2 <- hydro_releases2*revenue_power sum(irrig_benefits2,hydro_benefits2)/20 #> [1] 911240 For a 20-year period, the average annual revenue will always be less than that for a single year where the optimal releases are designed based on that same year. "],["groundwater.html", "Chapter 13 Groundwater", " Chapter 13 Groundwater Figure 13.1: A conceptual aquifer with a pumping well U.S. Geological Survey Groundwater content is forthcoming… "],["references.html", "References", " References Allen, R. G., & Food and Agriculture Organization of the United Nations (Eds.). (1998). Crop evapotranspiration: Guidelines for computing crop water requirements. Rome: Food and Agriculture Organization of the United Nations. Astagneau, P. C., Thirel, G., Delaigue, O., Guillaume, J. H. A., Parajka, J., Brauer, C. C., et al. (2021). Technical note: Hydrology modelling R packages – a unified analysis of models and practicalities from a user perspective. Hydrology and Earth System Sciences, 25(7), 3937–3973. https://doi.org/10.5194/hess-25-3937-2021 Camp, T. R. (1946). Design of sewers to facilitate flow. Sewage Works Journal, 18, 3–16. Davidian, Jacob. (1984). Computation of water-surface profiles in open channels (No. Techniques of Water-Resources Investigations, Book 3, Chapter A15). 
https://doi.org/10.3133/twri03A15 Ductile Iron Pipe Research Association. (2016). Thrust Restraint Design for Ductile Iron Pipe, Seventh Edition. Retrieved from https://dipra.org/technical-resources England, J.F., Cohn, T.A., Faber, B.A., Stedinger, J.R., Thomas, W.O., Veilleux, A.G., et al. (2019). Guidelines for Determining Flood Flow Frequency Bulletin 17C (Techniques and Methods) (p. 148). Reston, Virginia: U.S. Department of the Interior, U.S. Geological Survey. Retrieved from https://doi.org/10.3133/tm4B5 Finnemore, E. J., & Maurer, E. (2024). Fluid mechanics with civil engineering applications (Eleventh edition). New York: McGraw-Hill. Fox-Kemper, B., Hewitt, H., Xiao, C., Aðalgeirsdóttir, G., Drijfhout, S., Edwards, T., et al. (2021). Ocean, Cryosphere and Sea Level Change. In Climate Change 2021: The physical science basis. Contribution of working group I to the sixth assessment report of the intergovernmental panel on climate change. Masson-Delmotte, V., P. Zhai, A. Pirani, S. L. Connors, C. Péan, S. Berger, et al. (Eds.). Haaland, S. E. (1983). Simple and Explicit Formulas for the Friction Factor in Turbulent Pipe Flow. Journal of Fluids Engineering, 105(1), 89–90. https://doi.org/10.1115/1.3240948 Helsel, D.R., Hirsch, R.M., Ryberg, K.R., Archfield, S.A., & Gilroy, E.J. (2020). Statistical methods in water resources: U.S. Geological Survey Techniques and Methods, book 4, chap. A3 (p. 458). U.S. Geological Survey. Retrieved from https://doi.org/10.3133/tm4a3 Loucks, D. P., & Van Beek, E. (2017). Water Resource Systems Planning and Management. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-44234-1 Lovelace, R., Nowosad, J., & Münchow, J. (2019). Geocomputation with R. Boca Raton: CRC Press, Taylor & Francis Group, an Informa Business. A Chapman & Hall Book. Marshall, J. D., & Toffel, M. W. (2005). Framing the Elusive Concept of Sustainability: A Sustainability Hierarchy. 
Environmental Science & Technology, 39(3), 673–682. https://doi.org/10.1021/es040394k McCuen, R. (2016). Hydrologic analysis and design, 4th. Pearson Education. Moore, J., Chatsaz, M., d’Entremont, A., Kowalski, J., & Miller, D. (2022). Mechanics Map Open Textbook Project: Engineering Mechanics. Retrieved from https://eng.libretexts.org/Bookshelves/Mechanical_Engineering/Mechanics_Map_(Moore_et_al.) Pebesma, E. J., & Bivand, R. (2023). Spatial data science: With applications in R (First edition). Boca Raton, FL: CRC Press. Peterka, Alvin J. (1978). Hydraulic design of stilling basins and energy dissipators. Department of the Interior, Bureau of Reclamation. R Core Team. (2022). R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from https://www.R-project.org/ Searcy, J. K., & Hardison, C. H. (1960). Double-mass curves. US Government Printing Office. Slater, L. J., Thirel, G., Harrigan, S., Delaigue, O., Hurley, A., Khouakhi, A., et al. (2019). Using R in hydrology: A review of recent developments and future directions. Hydrology and Earth System Sciences, 23(7), 2939–2963. https://doi.org/10.5194/hess-23-2939-2019 Sturm, T. W. (2021). Open Channel Hydraulics (3rd Edition). New York: McGraw-Hill Education. Retrieved from https://www.accessengineeringlibrary.com/content/book/9781260469707 Swamee, P. K., & Jain, A. K. (1976). Explicit Equations for Pipe-Flow Problems. Journal of the Hydraulics Division, 102(5), 657–664. https://doi.org/10.1061/JYCEAJ.0004542 Tebaldi, C., Strauss, B. H., & Zervas, C. E. (2012). Modelling sea level rise impacts on storm surges along US coasts. Environmental Research Letters, 7(1), 014032. https://doi.org/10.1088/1748-9326/7/1/014032 Wurbs, R. A., & James, W. P. (2002). Water resources engineering. Upper Saddle River, NJ: Prentice Hall. 
"]] +[["index.html", "Hydraulics and Water Resources: Examples Using R Preface Introduction to R and RStudio Citing this reference Copyright", " Hydraulics and Water Resources: Examples Using R Ed Maurer Professor, Civil, Environmental, and Sustainable Engineering Department, Santa Clara University 2024-03-06 Preface This is a compilation of various R exercises and examples created over many years. They have been used mostly in undergraduate civil engineering classes including fluid mechanics, hydraulics, and water resources. This is a dynamic work, and will be regularly updated as errors are identified, improved presentation is developed, or new topics or examples are introduced. I welcome any suggestions or comments. In what follows, text will be intentionally brief. More extensive discussion and description can be found in any fluid mechanics, applied hydraulics, or water resources engineering text. Symbology for hydraulics in this reference generally follows that of Finnemore and Maurer (2024). Fundamental equations will be introduced though the emphasis will be on applications to solve common problems. Also, since this is written by a civil engineer, the only fluids included are water and air, since that accounts for nearly all problems encountered in the field. Solving water problems is rarely done by hand calculations, though the importance of performing order of magnitude ‘back of the envelope’ calculations cannot be overstated. Whether using a hand calculator, spreadsheet, or a programming language to produce a solution, having a sense of when an answer is an outlier will help catch errors. Scripting languages are powerful tools for performing calculations, providing a fully traceable and reproducible path from your input to a solution. Open source languages have the benefit of being free to use, and invite users to be part of a community helping improve the language and its capabilities. 
The language of choice for this book is R (R Core Team, 2022), chosen for its straightforward syntax, powerful graphical capabilities, wide use in engineering and in many other disciplines, and, through the RStudio interface, a look and feel much like Matlab®, with which most engineering students have some experience. Introduction to R and RStudio No introduction to R or RStudio is provided here. It is assumed that the reader has installed R (and RStudio), is comfortable installing and updating packages, and understands the basics of R scripting. Some resources that can provide an introduction to R include: A brief overview, aimed at students at Santa Clara University. An Introduction to R, a comprehensive reference by the R Core Team. Introduction to Programming with R by Stauffer et al., materials for a university course, including interactive exercises. R for Water Resources Data Science, with both introductory and intermediate level courses online. As I developed these exercises and text, I learned R through the work of many others, and the excellent help offered by skilled people sharing their knowledge on stackoverflow. The methods shown here are not the only ways to solve these problems, and users are invited to share alternative or better solutions. Citing this reference Maurer, Ed, 2023. Hydraulics and Water Resources: Examples Using R, doi:10.5281/zenodo.7576843 https://edm44.github.io/hydr-watres-book/. Copyright This work is provided under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). As a summary, this license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. This is a summary of (and not a substitute for) the license. 
"],["units-in-fluid-mechanics.html", "Chapter 1 Units in Fluid Mechanics", " Chapter 1 Units in Fluid Mechanics Before beginning with problem solving methods it helps to recall some important quantities in fluid mechanics and their associated units. While the world has generally moved forward into standardizing the use of the SI unit system, the U.S. stubbornly holds onto the antiquated US (sometimes called the British Gravitational, BG) system. This means practicing engineers must be familiar with both systems, and be able to convert between the two systems. These important quantities are shown in Table 1.1. Table 1.1: Dimensions and units for common quantities. Quantity Symbol Dimensions US (or BG) Units SI Units US to SI multiply by Length L \\(L\\) \\(ft\\) \\(m\\) 0.3048 Acceleration a \\(LT^{-2}\\) \\(ft/s^2\\) \\(m/s^{2}\\) 0.3048 Mass m \\(M\\) \\(slug\\) \\(kg\\) 14.59 Force F \\(F\\) \\(lb\\) \\(N\\) 4.448 Density \\(\\rho\\) \\(ML^{-3}\\) \\(slug/ft^3\\) \\(kg/m^3\\) 515.4 Energy/Work FL \\({ft}\\cdot{lb}\\) \\({N}\\cdot{m}=joule (J)\\) 1.356 Flowrate Q \\(L^{3}/T\\) \\(ft^{3}/s\\)=cfs \\(m^{3}/s\\) 0.02832 Kinematic viscocity \\(\\nu\\) \\(L^{2}/T\\) \\(ft^{2}/s\\) \\(m^{2}/s\\) 0.0929 Power \\(FLT^{-1}\\) \\({ft}\\cdot{lb/s}\\) \\({N}\\cdot{m/s}=watt (W)\\) 1.356 Pressure p \\(FL^{-2}\\) \\(lb/in^2=psi\\) \\(N/m^2=Pa\\) 6895 Specific Weight \\(\\gamma\\) \\(FL^{-3}\\) \\(lb/ft^3\\) \\(N/m^3\\) 157.1 Velocity V \\(LT^{-1}\\) \\(ft/s\\) \\(m/s\\) 0.3048 (Dynamic) Viscocity \\(\\mu\\) \\(FTL^{-2}\\) \\({lb}\\cdot{s/ft^2}\\) \\({N}\\cdot{s/m^2}={Pa}\\cdot{s}\\) 47.88 Volume \\(\\forall\\) \\(L^3\\) \\(ft^3\\) \\(m^3\\) 0.02832 There are many other units that must be accommodated. For example, one may encounter the poise to describe (dynamic) viscosity (\\(1~Pa*s = 10~poise\\)), or the stoke for kinematic viscocity (\\(1~m^2/s=10^4~stokes\\)). 
Many hydraulic systems use gallons per minute (gpm) as a unit of flow (\\(1~ft^3/s=448.8~gpm\\)), and larger water systems often use millions of gallons per day (mgd) (\\(1~mgd = 1.547~ft^3/s\\)). For volume, the SI system often uses liters (\\(l\\)) instead of \\(m^3\\) (\\(1~m^3=1000~l\\)). One regular conversion that needs to occur is the translation between mass (m) and weight (W): \\(W=mg\\), where \\(g\\) is gravitational acceleration on the earth’s surface: \\(g=9.81~m/s^2=32.2~ft/s^2\\). When working with forces (such as with momentum problems or hydrostatic forces) be sure to work with weights/forces, not mass. It is straightforward to use conversion factors in the table to manipulate values between the systems, multiplying by the factor shown to go from US to SI units, or dividing to go from SI to US. For example, converting a kinematic viscosity of \\(10^{-6}~m^2/s\\) to US units: \\[{1*10^{-6}~m^2/s}*\\frac{1 ~ft^2/s}{0.0929~m^2/s}=1.076*10^{-5} ~ft^2/s\\] Another example converts between two quantities in the US system: 100 gallons per minute to cfs: \\[{100 ~gpm}*\\frac{1 ~cfs}{448.8 ~gpm}=0.223 ~cfs\\] The units package in R can do these conversions and more, and also checks that conversions are permissible (producing an error if incompatible units are used). units::units_options(set_units_mode = "symbols") Q_gpm <- units::set_units(100, gallon/min) Q_gpm #> 100 [gallon/min] Q_cfs <- units::set_units(Q_gpm, ft^3/s) Q_cfs #> 0.2228009 [ft^3/s] Repeating the unit conversion of viscosity using the units package: Example 1.1 Convert kinematic viscosity from SI to Eng units. nu <- units::set_units(1e-6, m^2/s) nu #> 1e-06 [m^2/s] units::set_units(nu, ft^2/s) #> 1.076391e-05 [ft^2/s] The units package also produces correct units during operations. For example, multiplying mass by g should produce weight. Example 1.2 Using the units package to produce correct units during mathematical operations. #If you travel at 88 ft/sec for 1 hour, how many km would you travel? 
v <- units::set_units(88, ft/s) t <- units::set_units(1, hour) d <- v*t d #> 316800 [ft] units::set_units(d, km) #> 96.56064 [km] #What is the weight of a 4 slug mass, in pounds and Newtons? m <- units::set_units(4, slug) g <- units::set_units(32.2, ft/s^2) w <- m*g #Notice the units are technically correct, but have not been simplified in this case w #> 128.8 [ft*slug/s^2] #These can be set manually to verify that lbf (pound-force) is a valid equivalent units::set_units(w, lbf) #> 128.8 [lbf] units::set_units(w, N) #> 572.9308 [N] "],["properties-of-water.html", "Chapter 2 Properties of water (and air) 2.1 Properties important for water standing still 2.2 Properties important for moving water 2.3 Atmospheric Properties", " Chapter 2 Properties of water (and air) Fundamental properties of water allow the description of the forces it exerts and how it behaves while in motion. A table of these properties can be generated with the hydraulics package using a command like water_table(units = \"SI\"). A summary of basic water properties, which vary with temperature, is shown in Table 2.1 for SI units and Table 2.2 for US (or Eng) units. 
Table 2.1: Water properties in SI units

| Temp (C) | Density (kg m-3) | Spec_Weight (N m-3) | Viscosity (N s m-2) | Kinem_Visc (m2 s-1) | Sat_VP (Pa) | Surf_Tens (N m-1) | Bulk_Mod (Pa) |
|---|---|---|---|---|---|---|---|
| 0 | 999.9 | 9809 | 1.734e-03 | 1.734e-06 | 611.2 | 7.57e-02 | 2.02e+09 |
| 5 | 1000.0 | 9810 | 1.501e-03 | 1.501e-06 | 872.6 | 7.49e-02 | 2.06e+09 |
| 10 | 999.7 | 9807 | 1.310e-03 | 1.311e-06 | 1228 | 7.42e-02 | 2.10e+09 |
| 15 | 999.1 | 9801 | 1.153e-03 | 1.154e-06 | 1706 | 7.35e-02 | 2.14e+09 |
| 20 | 998.2 | 9793 | 1.021e-03 | 1.023e-06 | 2339 | 7.27e-02 | 2.18e+09 |
| 25 | 997.1 | 9781 | 9.108e-04 | 9.135e-07 | 3170 | 7.20e-02 | 2.22e+09 |
| 30 | 995.7 | 9768 | 8.174e-04 | 8.210e-07 | 4247 | 7.12e-02 | 2.25e+09 |
| 35 | 994.1 | 9752 | 7.380e-04 | 7.424e-07 | 5629 | 7.04e-02 | 2.26e+09 |
| 40 | 992.2 | 9734 | 6.699e-04 | 6.751e-07 | 7385 | 6.96e-02 | 2.28e+09 |
| 45 | 990.2 | 9714 | 6.112e-04 | 6.173e-07 | 9595 | 6.88e-02 | 2.29e+09 |
| 50 | 988.1 | 9693 | 5.605e-04 | 5.672e-07 | 1.235e+04 | 6.79e-02 | 2.29e+09 |
| 55 | 985.7 | 9670 | 5.162e-04 | 5.237e-07 | 1.576e+04 | 6.71e-02 | 2.29e+09 |
| 60 | 983.2 | 9645 | 4.776e-04 | 4.857e-07 | 1.995e+04 | 6.62e-02 | 2.28e+09 |
| 65 | 980.6 | 9619 | 4.435e-04 | 4.523e-07 | 2.504e+04 | 6.54e-02 | 2.26e+09 |
| 70 | 977.7 | 9592 | 4.135e-04 | 4.229e-07 | 3.120e+04 | 6.45e-02 | 2.25e+09 |
| 75 | 974.8 | 9563 | 3.869e-04 | 3.969e-07 | 3.860e+04 | 6.36e-02 | 2.23e+09 |
| 80 | 971.7 | 9533 | 3.631e-04 | 3.737e-07 | 4.742e+04 | 6.27e-02 | 2.20e+09 |
| 85 | 968.5 | 9501 | 3.419e-04 | 3.530e-07 | 5.787e+04 | 6.18e-02 | 2.17e+09 |
| 90 | 965.2 | 9468 | 3.229e-04 | 3.345e-07 | 7.018e+04 | 6.08e-02 | 2.14e+09 |
| 95 | 961.7 | 9434 | 3.057e-04 | 3.179e-07 | 8.461e+04 | 5.99e-02 | 2.10e+09 |
| 100 | 958.1 | 9399 | 2.902e-04 | 3.029e-07 | 1.014e+05 | 5.89e-02 | 2.07e+09 |

Table 2.2: Water properties in US units

| Temp (F) | Density (slug ft-3) | Spec_Weight (lbf ft-3) | Viscosity (lbf s ft-2) | Kinem_Visc (ft2 s-1) | Sat_VP (lbf ft-2) | Surf_Tens (lbf ft-1) | Bulk_Mod (lbf ft-2) |
|---|---|---|---|---|---|---|---|
| 32 | 1.9 | 62.42 | 3.621e-05 | 1.873e-05 | 12.77 | 5.18e-03 | 4.22e+07 |
| 42 | 1.9 | 62.43 | 3.087e-05 | 1.596e-05 | 18.94 | 5.13e-03 | 4.31e+07 |
| 52 | 1.9 | 62.40 | 2.658e-05 | 1.375e-05 | 27.62 | 5.07e-03 | 4.40e+07 |
| 62 | 1.9 | 62.36 | 2.311e-05 | 1.196e-05 | 39.64 | 5.02e-03 | 4.50e+07 |
| 72 | 1.9 | 62.29 | 2.026e-05 | 1.050e-05 | 56.00 | 4.96e-03 | 4.59e+07 |
| 82 | 1.9 | 62.20 | 1.790e-05 | 9.290e-06 | 77.99 | 4.90e-03 | 4.67e+07 |
| 92 | 1.9 | 62.09 | 1.594e-05 | 8.286e-06 | 107.2 | 4.84e-03 | 4.72e+07 |
| 102 | 1.9 | 61.97 | 1.429e-05 | 7.443e-06 | 145.3 | 4.78e-03 | 4.75e+07 |
| 112 | 1.9 | 61.83 | 1.289e-05 | 6.732e-06 | 194.7 | 4.72e-03 | 4.77e+07 |
| 122 | 1.9 | 61.68 | 1.171e-05 | 6.126e-06 | 258 | 4.66e-03 | 4.78e+07 |
| 132 | 1.9 | 61.52 | 1.069e-05 | 5.608e-06 | 338.1 | 4.59e-03 | 4.77e+07 |
| 142 | 1.9 | 61.34 | 9.808e-06 | 5.162e-06 | 438.5 | 4.53e-03 | 4.75e+07 |
| 152 | 1.9 | 61.16 | 9.046e-06 | 4.775e-06 | 563.2 | 4.46e-03 | 4.72e+07 |
| 162 | 1.9 | 60.96 | 8.381e-06 | 4.438e-06 | 716.9 | 4.39e-03 | 4.68e+07 |
| 172 | 1.9 | 60.75 | 7.797e-06 | 4.144e-06 | 904.5 | 4.32e-03 | 4.62e+07 |
| 182 | 1.9 | 60.53 | 7.283e-06 | 3.884e-06 | 1132 | 4.25e-03 | 4.55e+07 |
| 192 | 1.9 | 60.30 | 6.828e-06 | 3.655e-06 | 1405 | 4.18e-03 | 4.48e+07 |
| 202 | 1.9 | 60.06 | 6.423e-06 | 3.452e-06 | 1731 | 4.11e-03 | 4.40e+07 |
| 212 | 1.9 | 59.81 | 6.061e-06 | 3.271e-06 | 2118 | 4.04e-03 | 4.32e+07 |

What follows is a brief discussion of some of these properties, and how they can be applied in R. All of the properties shown in the tables above are produced using the hydraulics R package. The documentation for that package provides details on its use. The water property functions in the hydraulics package can be called with the ret_units input to return an object of class units, as designated by the package units. This enables new units to be deduced as operations are performed on the values. Concise examples are in the vignettes for the ‘units’ package.

2.1 Properties important for water standing still

An intrinsic property of water is its mass. In the presence of gravity, it exerts a weight on its surroundings. Forces caused by the weight of water enter design in many ways. Example 2.1 uses water mass and weight in a calculation.

Example 2.1 Determine the tension in the 8 mm diameter rope holding a bucket containing 12 liters of water. Ignore the weight of the bucket. Assume a water temperature of 20 \\(^\\circ\\)C.
```r
rho = hydraulics::dens(T = 20, units = 'SI', ret_units = TRUE)
#Water density:
rho
#> 998.2336 [kg/m^3]
#Find mass by multiplying by volume
vol <- units::set_units(12, liter)
m <- rho * vol
#Convert mass to weight in Newtons
g <- units::set_units(9.81, m/s^2)
w <- units::set_units(m*g, "N")
#Divide by the cross-sectional area of the rope to obtain the tensile stress
area <- units::set_units(pi/4 * 8^2, mm^2)
tension <- w/area
#Express the result in Pascals
units::set_units(tension, Pa)
#> 2337828 [Pa]
#For demonstration, convert to psi
units::set_units(tension, psi)
#> 339.0733 [psi]
```

For Example 2.1, units could have been tracked manually throughout, as if done by hand. The convenience of the units package is that it allows conversions that can be used to check hand calculations.

Water expands as it is heated, which is part of what is driving sea-level rise globally. Approximately 90% of the excess energy caused by global warming pollution is absorbed by oceans, with most of that occurring in the upper ocean: 0-700 m of depth (Fox-Kemper et al., 2021). Example 2.2 uses this thermal expansion in a calculation.

Example 2.2 Assume the ocean is made of fresh water (the change in density of sea water with temperature is close enough to fresh water for this illustration), and consider a 700 m thick upper layer of the ocean. Assume this upper layer has an initial temperature of 15 \\(^\\circ\\)C and calculate the change in mean sea level due to a 2 \\(^\\circ\\)C rise in temperature of this upper layer. It may help to consider a single 1 m x 1 m column of water with height h=700 m under original conditions.
Since mass is conserved, and mass = volume x density, this is simple: \\[LWh_1\\cdot\\rho_1=LWh_2\\cdot\\rho_2\\] or \\[h_2=h_1\\frac{\\rho_1}{\\rho_2}\\]

```r
rho1 = hydraulics::dens(T = 15, units = 'SI')
rho2 = hydraulics::dens(T = 17, units = 'SI')
h2 = 700 * (rho1/rho2)
cat(sprintf("Change in sea level = %.3f m\\n", h2-700))
#> Change in sea level = 0.227 m
```

The bulk modulus, Ev, relates the change in specific volume to the change in pressure, and is defined as in Equation (2.1). \\[\\begin{equation} E_v=-v\\frac{dp}{dv} \\tag{2.1} \\end{equation}\\] which can be discretized as Equation (2.2): \\[\\begin{equation} \\frac{v_2-v_1}{v_1}=-\\frac{p_2-p_1}{E_v} \\tag{2.2} \\end{equation}\\] where \\(v\\) is the specific volume (\\(v=\\frac{1}{\\rho}\\)) and \\(p\\) is pressure. Example 2.3 shows one application of this.

Example 2.3 A barrel of water has an initial temperature of 15 \\(^\\circ\\)C at atmospheric pressure (p=0 Pa gage). Plot the pressure the barrel must exert to have no change in volume as the water warms to 20 \\(^\\circ\\)C.

Here essentially the larger specific volume (at a higher temperature) is then compressed by \\({\\Delta}P\\) to return the volume to its original value. Thus, subscript 1 indicates the warmer condition, and subscript 2 the original at 15 \\(^\\circ\\)C.

```r
dp <- function(tmp) {
  rho2 <- hydraulics::dens(T = 15, units = 'SI')
  rho1 <- hydraulics::dens(T = tmp, units = 'SI')
  Ev <- hydraulics::Ev(T = tmp, units = 'SI')
  return((-((1/rho2) - (1/rho1))/(1/rho1))*Ev)
}
temps <- seq(from=15, to=20, by=1)
plot(temps, dp(temps), xlab="Temperature, C", ylab="Pressure increase, Pa", type="b")
```

Figure 2.1: Approximate change in pressure as water temperature increases.

The very high pressures required to compress water, even by a small fraction, validate the ordinary assumption that water can be considered incompressible in most applications.
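As a spot check of Equation (2.2), the pressure rise at 20 \\(^\\circ\\)C can be computed in a few lines of base R using rounded property values from Table 2.1 (an approximation; the hydraulics package functions return more precise values):

```r
# Single-point check of Equation (2.2) at 20 C, using rounded Table 2.1 values
rho_15 <- 999.1    # density at 15 C, kg/m^3
rho_20 <- 998.2    # density at 20 C, kg/m^3
Ev <- 2.18e9       # bulk modulus near 20 C, Pa
v1 <- 1/rho_20     # specific volume of the warmer water
v2 <- 1/rho_15     # specific volume after compression back to the original volume
dp <- -((v2 - v1)/v1) * Ev
dp                 # about 1.96e6 Pa, roughly 2 MPa
```

The result is on the order of 2 MPa, consistent with the magnitude of the plotted curve.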
It should be noted that the Ev values produced by the hydraulics package vary only with temperature, and assume standard atmospheric pressure; in reality, Ev values increase with increasing pressure, so the values plotted here serve only as a demonstration and underestimate the pressure increase.

2.2 Properties important for moving water

When describing the behavior of moving water in civil engineering infrastructure like pipes and channels, there are three primary water properties used in calculations, all of which vary with water temperature: density (\\(\\rho\\)), dynamic viscosity (\\(\\mu\\)), and kinematic viscosity (\\(\\nu\\)), which are related by Equation (2.3). \\[\\begin{equation} \\nu=\\frac{\\mu}{\\rho} \\tag{2.3} \\end{equation}\\]

Viscosity is caused by interaction of the fluid molecules as they are subjected to a shearing force. This is often illustrated by a conceptual sketch of two parallel plates, one fixed and one moving at a constant speed, with a fluid in between. Perhaps more intuitively, a s'more can be used. If the velocity of the marshmallow filling varies linearly, it will be stationary (V=0) at the bottom and moving at the same velocity as the upper cracker at the top (V=U). The force needed to move the upper cracker can be calculated using Equation (2.4) \\[\\begin{equation} F=A{\\mu}\\frac{dV}{dy} \\tag{2.4} \\end{equation}\\] where y is the distance between the crackers and A is the cross-sectional area of a cracker. Equation (2.4) is often written in terms of shear stress \\({\\tau}\\) as in Equation (2.5) \\[\\begin{equation} \\frac{F}{A}={\\tau}={\\mu}\\frac{dV}{dy} \\tag{2.5} \\end{equation}\\] The following demonstrates a use of these relationships.

Example 2.4 Determine the force required to slide the top cracker at 1 cm/s with a marshmallow thickness of 0.5 cm. The cross-sectional area of the crackers is 10 cm\\(^2\\).
The viscosity (dynamic viscosity, as can be discerned from the units) of marshmallow is about 0.1 Pa\\(\\cdot\\)s.

```r
#Assign variables
A <- units::set_units(10, cm^2)
U <- units::set_units(1, cm/s)
y <- units::set_units(0.5, cm)
mu <- units::set_units(0.1, Pa*s)
#Find shear stress
tau <- mu * U / y
tau
#> 0.2 [Pa]
#Since stress is F/A, multiply tau by A to find F, convert to Newtons and pounds
units::set_units(tau*A, N)
#> 2e-04 [N]
units::set_units(tau*A, lbf)
#> 4.496179e-05 [lbf]
```

Water is less viscous than marshmallow, so viscosity for water has much lower values than in the example. Values for water can be obtained using the hydraulics R package functions dens, dvisc, and kvisc. All of the water property functions can accept a list of input temperature values, enabling visualization of a property with varying water temperature, as shown in Figure 2.2.

```r
Ts <- seq(0, 100, 10)
nus <- hydraulics::kvisc(T = Ts, units = 'SI')
xlbl <- expression("Temperature, " (degree*C))
ylbl <- expression("Kinematic viscosity," ~nu~ (m^{2}/s))
par(cex=0.8, mgp = c(2,0.7,0))
plot(Ts, nus, xlab = xlbl, ylab = ylbl, type="l")
```

Figure 2.2: Variation of kinematic viscosity with temperature.

2.3 Atmospheric Properties

Since water interacts with the atmosphere, through processes like evaporation and condensation, some basic properties of air are helpful. Selected characteristics of the standard atmosphere, as determined by the International Civil Aviation Organization (ICAO), are included in the hydraulics package. Three functions, atmpres, atmdens, and atmtemp, return different properties of the standard atmosphere, which vary with altitude. These are summarized in Table 2.3 for SI units and Table 2.4 for US (or Eng) units.
Table 2.3: ICAO standard atmospheric properties in SI units

| Altitude (m) | Temp (C) | Pressure (Pa) | Density (kg m-3) |
|---|---|---|---|
| 0 | 15.00 | 101325.0 | 1.22500 |
| 1000 | 8.50 | 89876.3 | 1.11166 |
| 2000 | 2.00 | 79501.4 | 1.00655 |
| 3000 | -4.49 | 70121.1 | 0.90925 |
| 4000 | -10.98 | 61660.4 | 0.81935 |
| 5000 | -17.47 | 54048.2 | 0.73643 |
| 6000 | -23.96 | 47217.6 | 0.66011 |
| 7000 | -30.45 | 41105.2 | 0.59002 |
| 8000 | -36.93 | 35651.5 | 0.52579 |
| 9000 | -43.42 | 30800.6 | 0.46706 |
| 10000 | -49.90 | 26499.8 | 0.41351 |
| 11000 | -56.38 | 22699.8 | 0.36480 |
| 12000 | -62.85 | 19354.6 | 0.32062 |
| 13000 | -69.33 | 16421.2 | 0.28067 |
| 14000 | -75.80 | 13859.4 | 0.24465 |
| 15000 | -82.27 | 11631.9 | 0.21229 |

Table 2.4: ICAO standard atmospheric properties in US units

| Altitude (ft) | Temp (F) | Pressure (lbf ft-2) | Density (slug ft-3) |
|---|---|---|---|
| 0 | 59.00 | 2116.2 | 0.00237 |
| 5000 | 41.17 | 1760.9 | 0.00205 |
| 10000 | 23.36 | 1455.6 | 0.00175 |
| 15000 | 5.55 | 1194.8 | 0.00149 |
| 20000 | -12.25 | 973.3 | 0.00127 |
| 25000 | -30.05 | 786.3 | 0.00107 |
| 30000 | -47.83 | 629.7 | 0.00089 |
| 35000 | -65.61 | 499.3 | 0.00074 |
| 40000 | -83.37 | 391.8 | 0.00061 |
| 45000 | -101.13 | 303.9 | 0.00049 |
| 50000 | -118.88 | 232.7 | 0.00040 |

As with the water property functions, the data in the table can be extracted using individual commands for use in calculations. All atmospheric functions have input arguments of altitude (ft or m), unit system (SI or Eng), and whether or not units should be returned.

```r
hydraulics::atmpres(alt = 3000, units = "SI", ret_units = TRUE)
#> 70121.14 [Pa]
```

2.3.1 Ideal gas law

Because air is compressible, its density changes with pressure, and its temperature responds to compression. These are related through the ideal gas law, Equation (2.6) \\[\\begin{equation} \\begin{split} p={\\rho}RT\\\\ p{\\forall}=mRT \\end{split} \\tag{2.6} \\end{equation}\\] where \\(p\\) is absolute pressure, \\(\\forall\\) is the volume, \\(R\\) is the gas constant, \\(T\\) is absolute temperature, and \\(m\\) is the mass of the gas.
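As a quick plausibility check of Equation (2.6), the sea-level density in Table 2.3 can be recovered in base R from sea-level pressure and temperature, using the rounded R = 287 from Table 2.5 (so the result matches to about three decimal places):

```r
# rho = p/(R*T), with T converted to Kelvin (absolute temperature)
p <- 101325           # sea-level pressure, Pa (Table 2.3)
R_air <- 287          # gas constant for air, N*m/(kg*K) (Table 2.5)
Temp <- 15 + 273.15   # sea-level temperature, K
rho <- p/(R_air*Temp)
rho                   # about 1.225 kg/m^3, matching Table 2.3
```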
When air changes its condition between two states, the ideal gas law can be restated as Equation (2.7) \\[\\begin{equation} \\frac{p_1{\\forall_1}}{T_1}=\\frac{p_2{\\forall_2}}{T_2} \\tag{2.7} \\end{equation}\\] Two convenient forms of Equation (2.7) apply for specific conditions. If mass is conserved and conditions are isothermal (m, R, and T constant), Equation (2.8) applies: \\[\\begin{equation} p_1{\\forall_1}=p_2{\\forall_2} \\tag{2.8} \\end{equation}\\] If mass is conserved and temperature changes adiabatically (meaning no heat is exchanged with the surroundings) and reversibly, conditions are isentropic, governed by Equations (2.9). \\[\\begin{equation} \\begin{split} p_1{\\forall_1}^k=p_2{\\forall_2}^k\\\\ \\frac{T_2}{T_1}=\\left(\\frac{p_2}{p_1}\\right)^{\\frac{k-1}{k}} \\end{split} \\tag{2.9} \\end{equation}\\] Properties of air used in these formulations of the ideal gas law are shown in Table 2.5.

Table 2.5: Air properties at standard sea-level atmospheric pressure

| Gas Constant, R | Sp. Heat, cp | Sp. Heat, cv | Sp. Heat Ratio, k |
|---|---|---|---|
| 1715 ft lbf degR-1 slug-1 | 6000 ft lbf degR-1 slug-1 | 4285 ft lbf degR-1 slug-1 | 1.4 |
| 287 m N K-1 kg-1 | 1003 m N K-1 kg-1 | 717 m N K-1 kg-1 | 1.4 |

Chapter 3 Hydrostatics - forces exerted by water bodies

When water is motionless its weight exerts a pressure on surfaces with which it is in contact. The force is a function of the density of the fluid and the depth.

Figure 3.1: The Clywedog dam by Nigel Brown, CC BY-SA 2.0, via Wikimedia Commons

3.1 Pressure and force

A consideration of all of the forces acting on a particle in a fluid in equilibrium produces Equation (3.1).
\\[\\begin{equation} \\frac{dp}{dz}=-{\\gamma} \\tag{3.1} \\end{equation}\\] where \\(p\\) is pressure (\\(p=F/A\\)), \\(z\\) is height measured upward from a datum, and \\({\\gamma}\\) is the specific weight of the fluid (\\(\\gamma={\\rho}g\\)). Rewritten using depth measured downward from the water surface, \\(h\\), this produces Equation (3.2). \\[\\begin{equation} h=\\frac{p}{\\gamma} \\tag{3.2} \\end{equation}\\]

Example 3.1 Find the force on the bottom of a 0.4 m diameter barrel filled with (20 \\(^\\circ\\)C) water for barrel heights from 0.5 m to 1.5 m.

```r
area <- pi/4*0.4^2
gamma <- hydraulics::specwt(T = 20, units = 'SI')
heights <- seq(from=0.5, to=1.5, by=0.05)
pressures <- gamma * heights
forces <- pressures * area
plot(forces, heights, xlab="Total force on barrel bottom, N", ylab="Depth of water, m", type="l")
grid()
```

Figure 3.2: Force on barrel bottom.

The linear relationship is what was expected.

3.2 Force on a plane area

For a submerged flat surface, the magnitude of the hydrostatic force can be found using Equation (3.3). \\[\\begin{equation} F={\\gamma}y_c\\sin{\\theta}A={\\gamma}h_cA \\tag{3.3} \\end{equation}\\] The force is located as defined by Equation (3.4). \\[\\begin{equation} y_p=y_c+\\frac{I_c}{y_cA} \\tag{3.4} \\end{equation}\\] The variables correspond to the definitions in Figure 3.3.

Figure 3.3: Forces on a plane area, by Ertunc, CC BY-SA 4.0, via Wikimedia Commons

The location of the centroid and the moment of inertia, \\(I_c\\), for some common shapes are shown in Figure 3.4 (Moore, J. et al., 2022).

Figure 3.4: Centroids and moments of inertia for common shapes

Example 3.2 A 6 m long hinged gate with a width of 1 m (into the paper) is at an angle of 60\\(^\\circ\\) and is held in place by a horizontal cable. Plot the tension in the cable, \\(T\\), as the water depth, \\(h\\), varies from 0.1 to 4 m. Ignore the weight of the gate.
Figure 3.5: Reservoir with hinged gate (Olivier Cleyne, CC0 license, via Wikimedia Commons)

The wetted surface area of the gate is \\(A=L{\\cdot}w=\\frac{h{\\cdot}w}{\\sin(60)}\\). The wetted area is rectangular, so \\(h_c=\\frac{h}{2}\\). The magnitude of the force uses (3.3): \\[F={\\gamma}h_cA={\\gamma}\\frac{h}{2}\\frac{h{\\cdot}w}{\\sin(60)}\\] The distance along the plane from the water surface to the centroid of the wetted area is \\(y_c=\\frac{1}{2}\\frac{h}{\\sin(60)}\\). The moment of inertia for the rectangular wetted area is \\(I_c=\\frac{1}{12}w\\left(\\frac{h}{\\sin(60)}\\right)^3\\). Taking moments about the hinge at the bottom of the gate yields \\(T{\\cdot}6\\sin(60)-F{\\cdot}\\left(\\frac{h}{\\sin(60)}-y_p\\right)=0\\) or \\(T=\\frac{F}{6\\cdot\\sin(60)}\\left(\\frac{h}{\\sin(60)}-y_p\\right)\\) These equations can be used in R to create the desired plot.

```r
gate_length <- 6.0
w <- 1.0
theta <- 60*pi/180 #convert angle to radians
h <- seq(from=0.1, to=4.1, by=0.25)
gamma <- hydraulics::specwt(T = 20, units = 'SI')
area <- h*w/sin(theta)
hc <- h/2
Force <- gamma*hc*area
yc <- (1/2)*h/(sin(theta))
Ic <- (1/12)*w*(h/sin(theta))^3
yp <- yc + (Ic/(yc*area))
Tension <- Force/(gate_length*sin(theta)) * (h/sin(theta) - yp)
plot(Tension, h, xlab="Cable tension, N", ylab="Depth of water, m", type="l")
grid()
```

3.3 Forces on curved surfaces

For forces on curved surfaces, the procedure is often to calculate the vertical, \\(F_V\\), and horizontal, \\(F_H\\), hydrostatic forces separately. \\(F_H\\) is simpler, since it is the horizontal force on a (plane) vertical projection of the submerged surface, so the methods of Section 3.2 apply. The vertical component, \\(F_V\\), for a submerged surface with water above it has a magnitude equal to the weight of the water above it, and acts through the center of volume. For a curved surface with water below it, the magnitude of \\(F_V\\) equals the weight of the ‘missing’ water that would be above it, and the force acts upward.
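For a rectangular vertical projection of height h and width w, Equation (3.3) with \\(h_c=h/2\\) and \\(A=h{\\cdot}w\\) gives \\(F_H=\\frac{1}{2}{\\gamma}h^2w\\). A minimal base-R check, with values assumed for illustration:

```r
# Horizontal hydrostatic force on a vertical rectangular projection,
# Equation (3.3) with hc = h/2 and A = h*w. Values assumed for illustration.
gamma <- 9790   # specific weight of water near 20 C, N/m^3 (approximate)
h <- 2          # depth of water, m
w <- 1          # width into the page, m
Fh <- 0.5*gamma*h^2*w
Fh              # 19580 N, about 19.6 kN
```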
Figure 3.6: Forces on curved surfaces, by Ertunc, CC BY-SA 4.0, via Wikimedia Commons

A classic example of a curved surface in civil engineering hydraulics is a radial (or Tainter) gate, as in Figure 3.7.

Figure 3.7: Radial gates on the Rogue River, OR.

To simplify the geometry, a problem is presented in Example 3.3 where the gate meets the base at a horizontal angle.

Example 3.3 A radial gate with radius R=6 m and a width of 1 m (into the paper) controls water. Find the horizontal and vertical hydrostatic forces for depths, \\(h\\), from 0 to 6 m.

The horizontal hydrostatic force is that acting on a rectangle of height \\(h\\) and width \\(w\\): \\[F_H=\\frac{1}{2}{\\gamma}h^2w\\] which acts at a height of \\(y_c=\\frac{h}{3}\\) from the bottom of the gate. The vertical component has a magnitude equal to the weight of the ‘missing’ water indicated on the sketch. The calculation of its volume requires the area of a circular sector minus the area of the triangle above it. The angle, \\(\\theta\\), is found using geometry to be \\({\\theta}=\\cos^{-1}\\left(\\frac{R-h}{R}\\right)\\). Using the equations for the areas of these two components, as in Figure 3.4, the following is obtained: \\[F_V={\\gamma}w\\left(\\frac{R^2\\theta}{2}-\\frac{R-h}{2}R\\sin{\\theta}\\right)\\] The line of action of \\(F_V\\) can be determined by combining the components for centroids of the composite shapes, again following Figure 3.4. Because the line of action of the resultant force on a circular gate must pass through the center of the circle (since hydrostatic forces always act normal to the gate), the moments about the hinge of \\(F_H\\) and \\(F_V\\) must sum to zero. \\[\\sum{M}_{hinge}=0=F_H\\left(R-h/3\\right)-F_V{\\cdot}x_c\\] This produces the equation: \\[x_c=\\frac{F_H\\left(R-h/3\\right)}{F_V}\\] These equations can be solved in many ways, such as the following.
```r
R <- units::set_units(6.0, m)
w <- units::set_units(1.0, m)
gamma <- hydraulics::specwt(T = 20, units = 'SI', ret_units = TRUE)
h <- units::set_units(seq(from=0, to=6, by=1), m)
#angle in radians throughout, units not needed
theta <- units::drop_units(acos((R-h)/R))
Fh <- (1/2)*gamma*h^2*w
yc <- h/3
Fv <- gamma*w*((R^2*theta)/2 - ((R-h)/2) * R*sin(theta))
xc <- Fh*(R-h/3)/Fv
Ftotal <- sqrt(Fh^2+Fv^2)
tibble::tibble(h=h, Fh=Fh, yc=yc, Fv=Fv, xc=xc, Ftotal=Ftotal)
#> # A tibble: 7 × 6
#>       h      Fh    yc      Fv    xc  Ftotal
#>     [m]     [N]   [m]     [N]   [m]     [N]
#> 1     0      0  0          0   NaN       0
#> 2     1   4896. 0.333  22041.  1.26  22578.
#> 3     2  19585. 0.667  60665.  1.72  63748.
#> 4     3  44067. 1     108261.  2.04 116886.
#> 5     4  78341. 1.33  161583.  2.26 179573.
#> 6     5 122408. 1.67  218398.  2.43 250363.
#> 7     6 176268. 2     276881.  2.55 328228.
```

Chapter 4 Water flowing in pipes: energy losses

Flow in civil engineering infrastructure is usually either in pipes, where it is not exposed to the atmosphere and flows under pressure, or in open channels (canals, rivers, etc.). This chapter is concerned only with water flow in pipes. Once water begins to move, engineering problems often need to relate the flow rate to the energy dissipated. To accomplish this, the flow needs to be classified using dimensionless quantities, since energy dissipation varies with the flow conditions.

4.1 Important dimensionless quantity

As water begins to move, the flow is characterized by two quantities in engineering hydraulics: the Reynolds number, Re, and the Froude number, Fr.
The latter is more important for open channel flow and will be discussed in that chapter. The Reynolds number describes the turbulence of the flow, defined by the ratio of inertial forces, expressed by the velocity V and a characteristic length such as the pipe diameter, D, to viscous forces, expressed by the kinematic viscosity \\(\\nu\\), as in Equation (4.1) \\[\\begin{equation} Re=\\frac{VD}{\\nu} \\tag{4.1} \\end{equation}\\] For open channels the characteristic length is the hydraulic depth, the area of flow divided by the top width. For adequately turbulent conditions to exist, Reynolds numbers should exceed 4000 for full pipes, and 2000 for open channels.

4.2 Friction Loss in Circular Pipes

The energy at any point along a pipe containing flowing water is often described by the energy per unit weight, or energy head, E, as in Equation (4.2) \\[\\begin{equation} E = z+\\frac{P}{\\gamma}+\\alpha\\frac{V^2}{2g} \\tag{4.2} \\end{equation}\\] where P is the pressure, \\(\\gamma=\\rho g\\) is the specific weight of water, z is the elevation of the point, and V is the average velocity; each term has units of length. \\(\\alpha\\) is a kinetic energy adjustment factor that accounts for a non-uniform velocity distribution across the cross-section. \\(\\alpha\\) is typically assumed to be 1.0 for turbulent flow in circular pipes, because the true value is close to 1.0 and \\(\\frac{V^2}{2g}\\) (the velocity head) tends to be small in relation to the other terms in the equation. Some applications where velocity varies widely across a cross-section, such as a river channel with flow in a main channel and a flood plain, will need to account for values of \\(\\alpha\\) other than one. As water flows through a pipe, energy is lost due to friction with the pipe walls and local disturbances (minor losses). The energy loss between two sections is expressed as \\({E_1} - {h_l} = {E_2}\\).
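Equation (4.2) is easy to evaluate numerically. A minimal base-R sketch for a single cross-section, with \\(\\alpha\\)=1 and the point values (z, P, V) assumed purely for illustration:

```r
# Energy head, Equation (4.2), at one pipe cross-section.
# The point values below are assumed for illustration only.
z <- 10         # elevation, m
P <- 200000     # pressure, Pa
V <- 2.12       # mean velocity, m/s
gamma <- 9790   # specific weight of water near 20 C, N/m^3 (approximate)
g <- 9.81       # m/s^2
E <- z + P/gamma + V^2/(2*g)
E               # about 30.7 m; the velocity head contributes only about 0.23 m
```

Note how small the velocity head is relative to the pressure and elevation heads, which is why \\(\\alpha\\)=1 is usually an acceptable assumption in pipes.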
When pipes are long, with \\(\\frac{L}{D}>1000\\), friction losses dominate the energy loss in the system, and the head loss, \\(h_l\\), is calculated as the head loss due to friction, \\(h_f\\). This energy head loss due to friction with the walls of the pipe is described by the Darcy-Weisbach equation, which estimates the energy loss per unit weight, or head loss \\({h_f}\\), which has units of length. For circular pipes it is expressed by Equation (4.3) \\[\\begin{equation} h_f = \\frac{fL}{D}\\frac{V^2}{2g} = \\frac{8fL}{\\pi^{2}gD^{5}}Q^{2} \\tag{4.3} \\end{equation}\\] In Equation (4.3), f is the friction factor, typically calculated with the Colebrook equation (Equation (4.4)). \\[\\begin{equation} \\frac{1}{\\sqrt{f}} = -2\\log\\left(\\frac{\\frac{k_s}{D}}{3.7} + \\frac{2.51}{Re\\sqrt{f}}\\right) \\tag{4.4} \\end{equation}\\] In Equation (4.4), \\(k_s\\) is the absolute roughness of the pipe wall. There are close approximations to the Colebrook equation that have an explicit form to facilitate hand calculations, but when using R or other computational tools there is no need to use approximations.

4.3 Solving Pipe friction problems

As water flows through a pipe, energy is lost due to friction with the pipe walls and local disturbances (minor losses). For this example assume minor losses are negligible. The energy head loss due to friction with the walls of the pipe is described by the Darcy-Weisbach equation (Equation (4.3)), which estimates the energy loss per unit weight, or head loss \\(h_f\\), which has units of length. The Colebrook equation (Equation (4.4)) is commonly plotted as a Moody diagram to illustrate the relationships between the variables, as in Figure 4.1.

```r
hydraulics::moody()
```

Figure 4.1: Moody Diagram

Because of the form of the equations, they can sometimes be a challenge to solve, especially by hand. It can help to classify the types of problems based on which variable is unknown. These are summarized in Table 4.1.
Table 4.1: Types of Energy Loss Problems in Pipe Flow

| Type | Known | Unknown |
|---|---|---|
| 1 | Q (or V), D, ks, L | hL |
| 2 | hL, D, ks, L | Q (or V) |
| 3 | hL, Q (or V), ks, L | D |

When solving by hand, the types in Table 4.1 become progressively more difficult, but when using solvers the difference in complexity is subtle.

4.4 Solving for head loss (Type 1 problems)

The simplest pipe flow problem to solve is when the unknown is head loss, hf (equivalent to hL in the absence of minor losses), since all variables on the right side of the Darcy-Weisbach equation are known, except f.

4.4.1 Solving for head loss by manual iteration

While all unknowns are on the right side of Equation (4.3), iteration is still required because the Colebrook equation, Equation (4.4), cannot be solved explicitly for f. An illustration of solving this type of problem is shown in Example 4.1.

Example 4.1 Find the head loss (due to friction) of 20 \\(^\\circ\\)C water in a pipe with the following characteristics: Q=0.416 m\\(^3\\)/s, L=100 m, D=0.5 m, ks=0.046 mm.

Since the water temperature is known, first find the kinematic viscosity of water, \\({\\nu}\\), since it is needed for the Reynolds number. This can be obtained from a table in a reference or using software. Here we will use the hydraulics R package.

```r
nu <- hydraulics::kvisc(T=20, units="SI")
cat(sprintf("Kinematic viscosity = %.3e m2/s\\n", nu))
#> Kinematic viscosity = 1.023e-06 m2/s
```

We will need the Reynolds number to use the Colebrook equation, and that can be calculated since Q is known. This can be accomplished with a calculator, or using other software (R is used here):

```r
Q <- 0.416
D <- 0.5
A <- (3.14/4)*D^2
V <- Q/A
Re <- V*D/nu
cat(sprintf("Velocity = %.3f m/s, Re = %.3e\\n", V, Re))
#> Velocity = 2.120 m/s, Re = 1.036e+06
```

Now the only unknown in the Colebrook equation is f, but unfortunately f appears on both sides of the equation. To begin the iterative process, a first guess at f is needed.
A reasonable value to use is the minimum f value, fmin, given the known \\(\\frac{k_s}{D}=\\frac{0.046}{500}=0.000092=9.2\\cdot 10^{-5}\\). Reading horizontally from the right vertical axis of the Moody diagram to the left provides a value of \\(f_{min}\\approx 0.012\\). Numerically, it can be seen that f is independent of Re for large values of Re: when Re is large, the second term of the Colebrook equation becomes small and the equation approaches Equation (4.5). \\[\\begin{equation} \\frac{1}{\\sqrt{f}} = -2\\log\\left(\\frac{\\frac{k_s}{D}}{3.7}\\right) \\tag{4.5} \\end{equation}\\] This independence of f from Re is visible in the Moody Diagram, Figure 4.1, toward the right, where the lines become horizontal. Using Equation (4.5) the same value of fmin=0.012 is obtained, since the Colebrook equation defines the Moody diagram.

Iteration 1: Using f=0.012, the right side of the Colebrook equation is 8.656. The next estimate for f is then obtained by \\(\\frac{1}{\\sqrt{f}}=8.656\\), so f=0.0133.

Iteration 2: Using the new value of f=0.0133 in the right side of the Colebrook equation produces 8.677. A new value for f is obtained by \\(\\frac{1}{\\sqrt{f}}=8.677\\), so f=0.0133. The solution has converged!

Using the new value of f, the value for hf is calculated: \\[h_f = \\frac{8fL}{\\pi^{2}gD^{5}}Q^{2}=\\frac{8(0.0133)(100)}{\\pi^{2}(9.81)(0.5)^{5}}(0.416)^{2}=0.61 ~ m\\]

4.4.2 Solving for head loss using an empirical approximation

A shortcut that can be used to avoid iterating to find the friction factor is to use an approximation to the Colebrook equation that can be solved explicitly. One example is the Haaland equation (4.6) (Haaland, 1983). \\[\\begin{equation} \\frac{1}{\\sqrt{f}} = -1.8\\log\\left(\\left(\\frac{\\frac{k_s}{D}}{3.7}\\right)^{1.11}+\\frac{6.9}{Re}\\right) \\tag{4.6} \\end{equation}\\] For ordinary pipe flow conditions in water pipes, Equation (4.6) is accurate to within 1.5% of the Colebrook equation.
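The iterated value of f found above can be reproduced with the explicit Haaland approximation, Equation (4.6), in a single line of base R:

```r
# Haaland approximation, Equation (4.6), for the conditions of Example 4.1
ksD <- 0.000046/0.5   # relative roughness ks/D
Re <- 1.036e6         # Reynolds number
f <- (-1.8*log10((ksD/3.7)^1.11 + 6.9/Re))^-2
f                     # about 0.0132, within about 1% of the iterated 0.0133
```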
There are many other empirical equations, a common one being that of Swamee and Jain (Swamee & Jain, 1976), shown in Equation (4.7). \\[\\begin{equation} \\frac{1}{\\sqrt{f}} = -2\\log\\left(\\frac{\\frac{k_s}{D}}{3.7}+\\frac{5.74}{Re^{0.9}}\\right) \\tag{4.7} \\end{equation}\\] These approximations are useful for solving problems by hand or in spreadsheets, and their accuracy is generally within the uncertainty of other input variables like the absolute roughness.

4.4.3 Solving for head loss using an equation solver

Rather than use an empirical approximation (as in Section 4.4.2) to the Colebrook equation, it is straightforward to apply an equation solver to the Colebrook equation directly. This is demonstrated in Example 4.2.

Example 4.2 Find the friction factor for the same conditions as Example 4.1: D=0.5 m, ks=0.046 mm, and Re=1.036e+06.

First, rearrange the Colebrook equation so all terms are on one side of the equation, as in Equation (4.8). \\[\\begin{equation} -2\\log\\left(\\frac{\\frac{k_s}{D}}{3.7} + \\frac{2.51}{Re\\sqrt{f}}\\right) - \\frac{1}{\\sqrt{f}}=0 \\tag{4.8} \\end{equation}\\] Create a function using whatever equation solving platform you prefer. Here the R software is used:

```r
colebrk <- function(f, ks, D, Re) -2.0*log10((ks/D)/3.7 + 2.51/(Re*(f^0.5))) - 1/(f^0.5)
```

Find the root of the function (where it equals zero), specifying a reasonable range for f values using the interval argument:

```r
f <- uniroot(colebrk, interval = c(0.008, 0.1), ks=0.000046, D=0.5, Re=1.036e+06)$root
cat(sprintf("f = %.4f\\n", f))
#> f = 0.0133
```

The same value for hf as above results.

4.4.4 Solving for head loss using an R package

Equation solvers for implicit equations, like the one in Section 4.4.3, are built into the R package hydraulics, and can be applied directly, without writing a separate function.
Example 4.3 Using the hydraulics R package, find the friction factor and head loss for the same conditions as Example 4.2: Q=0.416 m3/s, L=100 m, D=0.5 m, ks=0.046 mm, and nu = 1.023053e-06 m2/s.
ans <- hydraulics::darcyweisbach(Q = 0.416,D = 0.5, L = 100, ks = 0.000046, nu = 1.023053e-06, units = c("SI"))
#> hf missing: solving a Type 1 problem
cat(sprintf("Reynolds no: %.0f\\nFriction Fact: %.4f\\nHead Loss: %.2f m\\n", ans$Re, ans$f, ans$hf))
#> Reynolds no: 1035465
#> Friction Fact: 0.0133
#> Head Loss: 0.61 m
If only the f value is needed, the colebrook function can be used.
f <- hydraulics::colebrook(ks=0.000046, V= 2.120, D=0.5, nu=1.023e-06)
cat(sprintf("f = %.4f\\n", f))
#> f = 0.0133
Notice that the colebrook function needs input in dimensionally consistent units. Because it is dimensionally homogeneous and the input dimensions are consistent, the unit system does not need to be defined as it is with many other functions in the hydraulics package.
4.5 Solving for Flow or Velocity (Type 2 problems)
When flow (Q) or velocity (V) is unknown, the Reynolds number cannot be determined, complicating the solution of the Colebrook equation. As with Section 4.4, there are several strategies for solving these, ranging from iterative manual calculations to using software packages. For Type 2 problems D is known, so once either V or Q is found the other follows from \\(Q=V{\\cdot}A=V\\frac{\\pi}{4}D^2\\).
4.5.1 Solving for Q (or V) using manual iteration
Solving a Type 2 problem can be done with manual iterations, as demonstrated in Example 4.4.
Example 4.4 Find the flow rate, Q, of 20°C water in a pipe with the following characteristics: hf=0.6 m, L=100 m, D=0.5 m, ks=0.046 mm.
First rearrange the Darcy-Weisbach equation to express V as a function of f, substituting all of the known quantities: \\[V = \\sqrt{\\frac{h_f}{L}\\frac{2gD}{f}}=\\frac{0.243}{\\sqrt{f}}\\] That provides one equation relating V and f.
The second equation relating V and f is one of the friction factor equations, such as the Colebrook equation or its graphic representation in the Moody diagram. An initial guess at a value for f is obtained using \\(f_{min}=0.012\\), as was done in Example 4.1.
Iteration 1: \\(V=\\frac{0.243}{\\sqrt{0.012}}=2.218\\); \\(Re=\\frac{2.218\\cdot 0.5}{1.023e-06}=1.084 \\cdot 10^6\\). A new f value is obtained from the Moody diagram or an equation using the new Re value: \\(f \\approx 0.0131\\)
Iteration 2: \\(V=\\frac{0.243}{\\sqrt{0.0131}}=2.123\\); \\(Re=\\frac{2.123\\cdot 0.5}{1.023e-06}=1.038 \\cdot 10^6\\). A new f estimate: \\(f \\approx 0.0132\\)
The iteration converges very quickly if a reasonable first guess is made. Using V=2.12 m/s, \\(Q = AV = \\left(\\frac{\\pi}{4}\\right)D^2V=0.416 ~ m^3/s\\)
4.5.2 Solving for Q Using an Explicit Equation
Solving Type 2 problems using iteration is not necessary, since an explicit equation based on the Colebrook equation can be derived. Solving the Darcy-Weisbach equation for \\(\\frac{1}{\\sqrt{f}}\\) and substituting that into the Colebrook equation produces Equation (4.9). \\[\\begin{equation} Q=-2.221D^2\\sqrt{\\frac{gDh_f}{L}} \\log\\left(\\frac{\\frac{k_s}{D}}{3.7} + \\frac{1.784\\nu}{D}\\sqrt{\\frac{L}{gDh_f}}\\right) \\tag{4.9} \\end{equation}\\] This can be solved explicitly for Q=0.413 m3/s.
4.5.3 Solving for Q Using an R package
Using software to solve the problem allows the use of the Colebrook equation in a straightforward format. The hydraulics package in R is applied to the same problem as above.
ans <- hydraulics::darcyweisbach(D=0.5, hf=0.6, L=100, ks=0.000046, nu=1.023e-06, units = c('SI'))
knitr::kable(format(as.data.frame(ans), digits = 3), format = "pipe")
Q V L D hf f ks Re
0.406 2.07 100 0.5 0.6 0.0133 4.6e-05 1010392
The answer differs from the manual iteration by just over 2%, which is within the precision expected given the rounding in the manual calculations.
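Equation (4.9) can also be evaluated directly; a quick check in Python using the values of Example 4.4 (illustration only):

```python
import math

# Direct evaluation of Equation (4.9) with the Example 4.4 values:
# hf = 0.6 m, L = 100 m, D = 0.5 m, ks = 0.046 mm, nu = 1.023e-6 m^2/s
g, hf, L, D, ks, nu = 9.81, 0.6, 100.0, 0.5, 0.000046, 1.023e-6

Q = (-2.221 * D**2 * math.sqrt(g * D * hf / L)
     * math.log10((ks / D) / 3.7 + (1.784 * nu / D) * math.sqrt(L / (g * D * hf))))
print(f"Q = {Q:.3f} m^3/s")   # Q = 0.413 m^3/s
```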
4.6 Solving for pipe diameter, D (Type 3 problems)
When D is unknown, neither Re nor the relative roughness \\(\\frac{k_s}{D}\\) is known. Referring to the Moody diagram, Figure 4.1, the difficulty in estimating a value for f (on the left axis) is evident, since the position on neither the right axis (\\(\\frac{k_s}{D}\\)) nor the x-axis (Re) is known.
4.6.1 Solving for D using manual iterations
Solving for D using manual iterations is done by first rearranging Equation (4.9) to allow it to be solved for zero, as in Equation (4.10). \\[\\begin{equation} -2.221D^2\\sqrt{\\frac{gDh_f}{L}} \\log\\left(\\frac{\\frac{k_s}{D}}{3.7} + \\frac{1.784\\nu}{D}\\sqrt{\\frac{L}{gDh_f}}\\right)-Q=0 \\tag{4.10} \\end{equation}\\] Using this with manual iterations is demonstrated in Example 4.5.
Example 4.5 For a problem similar to Example 4.4, use Q=0.416 m3/s and solve for the required pipe diameter, D. This can be solved manually by guessing values and repeating the calculation in a spreadsheet or with a tool like R.
Iteration 1: Guess an arbitrary value of D=0.3 m. The left side of Equation (4.10) produces a value of -0.31.
Iteration 2: Guess another value, D=1.0 m. The left side of Equation (4.10) produces a value of 2.11.
The root, where the function equals zero, lies between these two values, so the correct D is between 0.3 and 1.0. Repeated trials can home in on a solution, and plotting the results from many trials can help guide the search; the root is seen to lie very close to D=0.5 m.
4.6.2 Solving for D using an equation solver
An equation solver automatically accomplishes the manual steps of the prior demonstration.
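The bracketing in Example 4.5 is exactly what a root finder automates; a minimal bisection sketch in Python over the interval found above (illustration only):

```python
import math

# Bisection on Equation (4.10) between the two trial diameters of Example 4.5
g, hf, L, ks, nu, Q = 9.81, 0.6, 100.0, 0.000046, 1.023053e-06, 0.416

def q_fcn(D):
    return (-2.221 * D**2 * math.sqrt(g * D * hf / L)
            * math.log10((ks / D) / 3.7 + (1.784 * nu / D) * math.sqrt(L / (g * D * hf)))
            - Q)

lo, hi = 0.3, 1.0            # q_fcn(0.3) < 0 and q_fcn(1.0) > 0, so the root is bracketed
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if q_fcn(mid) < 0 else (lo, mid)
Droot = 0.5 * (lo + hi)
print(f"D = {Droot:.3f} m")   # D = 0.501 m
```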
Equation (4.10) can be written as a function that can then be solved for the root, again using R software for the demonstration:
q_fcn <- function(D, Q, hf, L, ks, nu, g) {
  -2.221 * D^2 * sqrt(( g * D * hf)/L) * log10((ks/D)/3.7 + (1.784 * nu/D) * sqrt(L/(g * D * hf))) - Q
}
The uniroot function can solve the equation in R (or use a comparable approach in other software) for a reasonable range of D values:
ans <- uniroot(q_fcn, interval=c(0.01,4.0),Q=0.416, hf=0.6, L=100, ks=0.000046, nu=1.023053e-06, g=9.81)$root
cat(sprintf("D = %.3f m\\n", ans))
#> D = 0.501 m
4.6.3 Solving for D using an R package
The hydraulics R package implements an equation-solving technique like that above to allow the direct solution of Type 3 problems. The prior example is solved using that package as shown below.
ans <- hydraulics::darcyweisbach(Q=0.416, hf=0.6, L=100, ks=0.000046, nu=1.023e-06, ret_units = TRUE, units = c('SI'))
knitr::kable(format(as.data.frame(ans), digits = 3), format = "pipe")
ans
Q 0.416 [m^3/s]
V 2.11 [m/s]
L 100 [m]
D 0.501 [m]
hf 0.6 [m]
f 0.0133 [1]
ks 4.6e-05 [m]
Re 1032785 [1]
4.7 Parallel pipes: solving a system of equations
In the examples above the challenge was often to solve a single implicit equation. The manual iteration approach can work to solve two equations, but as the number of equations increases, especially when using implicit equations, an equation solver is needed. For the case of a simple pipe loop, manual iterations are impractical; for this reason fixed values of f are often assumed, or an empirical energy loss equation is used. However, a single loop, identical to a parallel pipe problem, can be used to demonstrate how systems of equations can be solved simultaneously for systems of pipes. Example 4.6 demonstrates the process of assembling the equations for a solver for a parallel pipe problem.
Example 4.6 Two pipes carry a flow of Q=0.5 m3/s, as depicted in Figure 4.2.
Figure 4.2: Parallel Pipe Example
The fundamental equations needed are the Darcy-Weisbach equation, the Colebrook equation, and continuity (conservation of mass). For the illustrated system, this means:
The flows through each pipe must add to the total flow
The head loss through Pipe 1 must equal that of Pipe 2
This could be set up as a system of anywhere from 2 to 10 equations to solve simultaneously. In this example four equations are used: \\[\\begin{equation} Q_1+Q_2-Q_{total}=V_1\\frac{\\pi}{4}D_1^2+V_2\\frac{\\pi}{4}D_2^2-0.5m^3/s=0 \\tag{4.11} \\end{equation}\\] and \\[\\begin{equation} h_{f1}-h_{f2} = \\frac{f_1L_1}{D_1}\\frac{V_1^2}{2g} -\\frac{f_2L_2}{D_2}\\frac{V_2^2}{2g}=0 \\tag{4.12} \\end{equation}\\] The other two equations are the Colebrook equation (4.8) for solving for the friction factor for each pipe. These four equations can be solved simultaneously using an equation solver, such as the fsolve function in the R package pracma.
#assign known inputs - SI units
Qsum <- 0.5
D1 <- 0.2
D2 <- 0.3
L1 <- 400
L2 <- 600
ks <- 0.000025
g <- 9.81
nu <- hydraulics::kvisc(T=100, units='SI')
#Set up the function that sets up 4 unknowns (x) and 4 equations (y)
F_trial <- function(x) {
  V1 <- x[1]
  V2 <- x[2]
  f1 <- x[3]
  f2 <- x[4]
  Re1 <- V1*D1/nu
  Re2 <- V2*D2/nu
  y <- numeric(length(x))
  #Continuity - flows in each branch must add to total
  y[1] <- V1*pi/4*D1^2 + V2*pi/4*D2^2 - Qsum
  #Darcy-Weisbach equation for head loss - must be equal in each branch
  y[2] <- f1*L1*V1^2/(D1*2*g) - f2*L2*V2^2/(D2*2*g)
  #Colebrook equation for friction factors
  y[3] <- -2.0*log10((ks/D1)/3.7 + 2.51/(Re1*(f1^0.5)))-1/(f1^0.5)
  y[4] <- -2.0*log10((ks/D2)/3.7 + 2.51/(Re2*(f2^0.5)))-1/(f2^0.5)
  return(y)
}
#provide initial guesses for unknowns and run the fsolve command
xstart <- c(2.0, 2.0, 0.01, 0.01)
z <- pracma::fsolve(F_trial, xstart)
#prepare some results to print
Q1 <- z$x[1]*pi/4*D1^2
Q2 <- z$x[2]*pi/4*D2^2
hf1 <- z$x[3]*L1*z$x[1]^2/(D1*2*g)
hf2 <- z$x[4]*L2*z$x[2]^2/(D2*2*g)
cat(sprintf("Q1=%.2f, Q2=%.2f, V1=%.1f, V2=%.1f, hf1=%.1f, hf2=%.1f, f1=%.3f, f2=%.3f\\n", Q1,Q2,z$x[1],z$x[2],hf1,hf2,z$x[3],z$x[4]))
#> Q1=0.15, Q2=0.35, V1=4.8, V2=5.0, hf1=30.0, hf2=30.0, f1=0.013, f2=0.012
If the fsolve command fails, a simple remedy is sometimes to revise your initial guesses and try again. Other solvers in R, and in other scripting languages, can be implemented in a similar way. If the simplification of fixed f values were applied, Equations (4.11) and (4.12) could be solved simultaneously for V1 and V2.
4.8 Simple pipe networks: the Hardy-Cross method
For water pipe networks containing multiple loops, manually setting up systems of equations is impractical. In addition, hand calculations generally assume fixed f values or use an empirical friction loss equation to simplify calculations. A typical approach to solving for the flow in each pipe segment in a small network uses the Hardy-Cross method.
This consists of setting up an initial guess of flow (magnitude and direction) for each pipe segment, ensuring conservation of mass is preserved at each node (or vertex) in the network. Then calculations are performed for each loop, ensuring energy is conserved. When using the Darcy-Weisbach equation, Equation (4.3), for friction loss, the head loss in each pipe segment is usually expressed in a condensed form as \\({h_f = KQ^{2}}\\) where K is defined as in Equation (4.13). \\[\\begin{equation} K = \\frac{8fL}{\\pi^{2}gD^{5}} \\tag{4.13} \\end{equation}\\] When doing calculations by hand, fixed f values are assumed, but when using a computational tool like R any of the methods for estimating f and hf may be applied. The Hardy-Cross method begins by assuming flows in each segment of a loop. These initial flows are then adjusted in a series of iterations. The flow adjustment in each loop is calculated at each iteration using Equation (4.14). \\[\\begin{equation} \\Delta{Q_i} = -\\frac{\\sum_{j=1}^{p_i} K_{ij}Q_j|Q_j|}{\\sum_{j=1}^{p_i} 2K_{ij}|Q_j|} \\tag{4.14} \\end{equation}\\] Calculations for small systems with two or three loops can be done manually with fixed f and K values. Using the hydraulics R package to solve a small pipe network is demonstrated in Example 4.7.
Example 4.7 Find the flows in each pipe in the system shown in Figure 4.3. Input consists of pipe characteristics, pipe order and initial flows for each loop, as shown on the diagram.
Figure 4.3: A sample pipe network with pipe numbers indicated in black
Input for this system, assuming fixed f values, would look like the following. (If fixed K values are provided, f, L and D are not needed). These f values were estimated using \\(k_s=0.00025 ~ m\\) in the form of the Colebrook equation for fully rough flows, Equation (4.5).
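Those fully rough f values, and the corresponding K values from Equation (4.13), are easy to verify; a short sketch in Python (illustration only, separate from the R input for the example):

```python
import math

# Check of the fixed f values in Example 4.7, using ks = 0.00025 m in the
# fully rough form of the Colebrook equation, Equation (4.5)
g, ks = 9.81, 0.00025

def f_rough(D):
    # Equation (4.5): 1/sqrt(f) = -2 log10((ks/D)/3.7)
    return (-2.0 * math.log10((ks / D) / 3.7)) ** -2

def K_pipe(f, L, D):
    # Equation (4.13): K = 8 f L / (pi^2 g D^5)
    return 8 * f * L / (math.pi**2 * g * D**5)

for D in (0.3, 0.2, 0.15, 0.25):
    print(f"D = {D:.2f} m: f = {f_rough(D):.5f}")
# reproduces the f values 0.01879, 0.02075, 0.02233 and 0.01964 used in the input

# K for pipe 1 (D = 0.3 m, L = 250 m), close to the tabulated value of about 159.8
print(f"K1 = {K_pipe(f_rough(0.3), 250, 0.3):.1f}")
```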
dfpipes <- data.frame(
  ID = c(1,2,3,4,5,6,7,8,9,10), #pipe ID
  D = c(0.3,0.2,0.2,0.2,0.2,0.15,0.25,0.15,0.15,0.25), #diameter in m
  L = c(250,100,125,125,100,100,125,100,100,125), #length in m
  f = c(.01879,.02075,.02075,.02075,.02075,.02233,.01964,.02233,.02233,.01964)
)
loops <- list(c(1,2,3,4,5),c(4,6,7,8),c(3,9,10,6))
Qs <- list(c(.040,.040,.02,-.02,-.04),c(.02,0,0,-.02),c(-.02,.02,0,0))
Running the hardycross function and looking at the output after three iterations (defined by n_iter):
ans <- hydraulics::hardycross(dfpipes = dfpipes, loops = loops, Qs = Qs, n_iter = 3, units = "SI")
knitr::kable(ans$dfloops, digits = 4, format = "pipe", padding=0)
loop pipe flow
1 1 0.0383
1 2 0.0383
1 3 0.0232
1 4 -0.0258
1 5 -0.0417
2 4 0.0258
2 6 0.0090
2 7 0.0041
2 8 -0.0159
3 3 -0.0232
3 9 0.0151
3 10 -0.0049
3 6 -0.0090
The output pipe data frame has added columns, including the flow (where direction is that for the first loop containing the segment).
knitr::kable(ans$dfpipes, digits = 4, format = "pipe", padding=0)
ID D L f Q K
1 0.30 250 0.0188 0.0383 159.7828
2 0.20 100 0.0208 0.0383 535.9666
3 0.20 125 0.0208 0.0232 669.9582
4 0.20 125 0.0208 -0.0258 669.9582
5 0.20 100 0.0208 -0.0417 535.9666
6 0.15 100 0.0223 0.0090 2430.5356
7 0.25 125 0.0196 0.0041 207.7883
8 0.15 100 0.0223 -0.0159 2430.5356
9 0.15 100 0.0223 0.0151 2430.5356
10 0.25 125 0.0196 -0.0049 207.7883
While the Hardy-Cross method is often used with fixed f (or K) values in hand calculations, the use of the Colebrook equation allows friction losses to vary with Reynolds number. To use this approach the input data must include absolute roughness.
Example values are included here:
dfpipes <- data.frame(
  ID = c(1,2,3,4,5,6,7,8,9,10), #pipe ID
  D = c(0.3,0.2,0.2,0.2,0.2,0.15,0.25,0.15,0.15,0.25), #diameter in m
  L = c(250,100,125,125,100,100,125,100,100,125), #length in m
  ks = rep(0.00025,10) #absolute roughness, m
)
loops <- list(c(1,2,3,4,5),c(4,6,7,8),c(3,9,10,6))
Qs <- list(c(.040,.040,.02,-.02,-.04),c(.02,0,0,-.02),c(-.02,.02,0,0))
The effect of allowing the calculation of f to be (correctly) dependent on velocity (via the Reynolds number) can be seen, though the effect on final flow values is small.
ans <- hydraulics::hardycross(dfpipes = dfpipes, loops = loops, Qs = Qs, n_iter = 3, units = "SI")
knitr::kable(ans$dfpipes, digits = 4, format = "pipe", padding=0)
ID D L ks Q f K
1 0.30 250 3e-04 0.0382 0.0207 176.1877
2 0.20 100 3e-04 0.0382 0.0218 562.9732
3 0.20 125 3e-04 0.0230 0.0224 723.1119
4 0.20 125 3e-04 -0.0258 0.0222 718.1439
5 0.20 100 3e-04 -0.0418 0.0217 560.8321
6 0.15 100 3e-04 0.0088 0.0248 2700.4710
7 0.25 125 3e-04 0.0040 0.0280 296.3990
8 0.15 100 3e-04 -0.0160 0.0238 2590.2795
9 0.15 100 3e-04 0.0152 0.0239 2598.5553
10 0.25 125 3e-04 -0.0048 0.0270 285.4983
Chapter 5 Flow in open channels
Where flowing water is exposed to the atmosphere, and thus not under pressure, its condition is called open channel flow.
Typical design challenges can be:
Determining how deep water will flow in a channel
Finding the bottom slope required to carry a defined flow in a channel
Comparing different cross-sectional shapes and dimensions to carry flow
In pipe flow the cross-sectional area does not change with flow rate, which simplifies some aspects of calculations. By contrast, in open channel flow conditions including flow depth, area, and roughness can all vary with flow rate, which tends to make the equations more cumbersome. In civil engineering applications, roughness characteristics are not usually considered to vary with flow rate. In what follows, three conditions for flow are considered:
Uniform flow, where flow characteristics do not vary along the length of a channel
Gradually varied flow, where flow responds to an obstruction or change in channel conditions with a gradual adjustment in flow depth
Rapidly varied flow, where an abrupt channel transition results in a rapid change in water surface, the most important case of which is the hydraulic jump
5.1 An important dimensionless quantity
For open channel flow, given a channel shape and flow rate, flow can usually exist at two different depths, termed subcritical (slow, deep) and supercritical (shallow, fast). The exception is at critical flow conditions, where only one depth exists, the critical depth. Which of these depths is exhibited by the flow is determined by the slope and roughness of the channel. The Froude number characterizes whether flow is critical, supercritical or subcritical, and is defined by Equation (5.1) \\[\\begin{equation} Fr=\\frac{V}{\\sqrt{gD}} \\tag{5.1} \\end{equation}\\] The Froude number characterizes flow as:
Fr Condition Description
<1.0 subcritical slow, deep
=1.0 critical undulating, transitional
>1.0 supercritical fast, shallow
Critical flow is important in open-channel flow applications and is discussed further below.
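As a simple numerical illustration of Equation (5.1), with hypothetical values not taken from the text:

```python
import math

# Froude number for a hypothetical flow: V = 2.0 m/s at hydraulic depth D = 1.0 m
g = 9.81
V, D = 2.0, 1.0
Fr = V / math.sqrt(g * D)
print(f"Fr = {Fr:.2f}")   # Fr = 0.64: subcritical (slow, deep)
```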
5.2 Equations for open channel flow
Flow conditions in an open channel under uniform flow conditions are often related by the Manning equation (5.2). \\[\\begin{equation} Q=A\\frac{C}{n}{R}^{\\frac{2}{3}}{S}^{\\frac{1}{2}} \\tag{5.2} \\end{equation}\\] In Equation (5.2), C is 1.0 for SI units and 1.49 for Eng (British Gravitational, English, or U.S. Customary) units. Q is the flow rate, A is the cross-sectional flow area, n is the Manning roughness coefficient, S is the longitudinal channel slope, and R is the hydraulic radius, defined by Equation (5.3) \\[\\begin{equation} R=\\frac{A}{P} \\tag{5.3} \\end{equation}\\] where P is the wetted perimeter.
Critical depth is defined by the relation (at critical conditions) in Equation (5.4) \\[\\begin{equation} \\frac{Q^{2}B}{g\\,A^{3}}=1 \\tag{5.4} \\end{equation}\\] where B is the width of the water surface (top width).
Because the channel geometry is included in A and R, it helps to work with specific shapes when adapting these equations. The two most common are trapezoidal and circular, included in Sections 5.3 and 5.4 below. As with pipe flow, the energy equation applies to one-dimensional open channel flow as well, Equation (5.5): \\[\\begin{equation} \\frac{V_1^2}{2g}+y_1+z_1=\\frac{V_2^2}{2g}+y_2+z_2+h_L \\tag{5.5} \\end{equation}\\] where point 1 is upstream of point 2, V is the flow velocity, y is the flow depth, and z is the elevation of the channel bottom. \\(h_L\\) is the energy head loss from point 1 to point 2. For uniform flow, \\(h_L\\) is the drop in elevation between the two points due to the channel slope.
5.3 Trapezoidal channels
In engineering applications one of the most common channel shapes is trapezoidal.
Figure 5.1: Typical symmetrical trapezoidal cross section
The geometrical relationships for a trapezoid are: \\[\\begin{equation} A=(b+my)y \\tag{5.6} \\end{equation}\\] \\[\\begin{equation} P=b+2y\\sqrt{1+m^2} \\tag{5.7} \\end{equation}\\] Combining Equations (5.6) and (5.7) yields: \\[\\begin{equation} R=\\frac{A}{P}=\\frac{\\left(b+my\\right)y}{b+2y\\sqrt{1+m^2}} \\tag{5.8} \\end{equation}\\] Top width: \\(B=b+2\\,m\\,y\\).
Substituting Equations (5.6) and (5.8) into the Manning equation produces Equation (5.9). \\[\\begin{equation} Q=\\frac{C}{n}{\\frac{\\left(by+my^2\\right)^{\\frac{5}{3}}}{\\left(b+2y\\sqrt{1+m^2}\\right)^\\frac{2}{3}}}{S}^{\\frac{1}{2}} \\tag{5.9} \\end{equation}\\]
5.3.1 Solving the Manning equation in R
To solve Equation (5.9) when any variable other than Q is unknown, it is straightforward to rearrange it to a form of y(x) = 0. \\[\\begin{equation} Q-\\frac{C}{n}{\\frac{\\left(by+my^2\\right)^{\\frac{5}{3}}}{\\left(b+2y\\sqrt{1+m^2}\\right)^\\frac{2}{3}}}{S}^{\\frac{1}{2}}=0 \\tag{5.10} \\end{equation}\\] This allows the use of a standard solver to find the root(s). If solving by hand, trial and error can be employed as well, and with careful selection of guesses a solution can be obtained relatively quickly; using solvers makes the process much quicker and less prone to error. Example 5.1 demonstrates the solution of Equation (5.10) for the flow depth, y.
Example 5.1 Find the flow depth, y, for a trapezoidal channel with Q=225 ft3/s, n=0.016, m=2, b=10 ft, S=0.0006.
The Manning equation can be set up as a function in terms of a missing variable, here using normal depth, y, as the missing variable.
yfun <- function(y) {
  Q - (((y * (b + m * y)) ^ (5 / 3) * sqrt(S)) * (C / n) / ((b + 2 * y * sqrt(1 + m ^ 2)) ^ (2 / 3)))
}
Because these use US Customary (or English) units, C=1.486. Define all of the needed input variables for the function.
Q <- 225.
n <- 0.016
m <- 2
b <- 10.0
S <- 0.0006
C <- 1.486
Use the R function uniroot to find a single root within a defined interval. Set the interval (the range of possible y values in which to search for a root) to cover all plausible values; here it runs from a very small positive depth up to 200 ft.
ans <- uniroot(yfun, interval = c(0.0000001, 200), extendInt = "yes")
cat(sprintf("Normal Depth: %.3f ft\\n", ans$root))
#> Normal Depth: 3.406 ft
Functions can usually be given multiple values as input, returning the corresponding values of output. This allows plots to be created to show, for example, how the left side of Equation (5.10) varies with different values of depth, y.
ys <- seq(0.1, 5, 0.1)
plot(ys,yfun(ys), type='l', xlab = "y, ft", ylab = "Function to solve for zero")
abline(h=0)
grid()
Figure 5.2: Variation of the left side of Equation (5.10) with y for Example 5.1.
This validates the result in the example, showing that the root of Equation (5.10), where the function has a value of 0, occurs for a depth, y, of a little less than 3.5 ft.
5.3.2 Solving the Manning equation with the hydraulics R package
The hydraulics package has a manningt (the ‘t’ is for ‘trapezoid’) function for trapezoidal channels. Example 5.2 demonstrates its usage.
Example 5.2 Find the uniform (normal) flow depth, y, for a trapezoidal channel with Q=225 ft3/s, n=0.016, m=2, b=10 ft, S=0.0006.
Specifying “Eng” units ensures the correct C value is used. Sf is the same as S in Equations (5.2) and (5.9) since flow is uniform.
ans <- hydraulics::manningt(Q = 225., n = 0.016, m = 2, b = 10., Sf = 0.0006, units = "Eng")
cat(sprintf("Normal Depth: %.3f ft\\n", ans$y))
#> Normal Depth: 3.406 ft
#critical depth is also returned, along with other variables.
cat(sprintf("Critical Depth: %.3f ft\\n", ans$yc))
#> Critical Depth: 2.154 ft
5.3.3 Solving the Manning equation using a spreadsheet like Excel
Spreadsheet software is very popular and can accomplish many technical tasks, such as solving equations.
This example uses Excel with its solver add-in activated, though other spreadsheet software has similar solver add-ins that can be used. The first step is to enter the input data, for the same example as above, along with an initial guess for the variable you wish to solve for. The equation for which a root will be determined is typed in using the initial guess, for y in this case. At this point you could use a trial-and-error approach and simply try different values for y until the equation produces something close to 0. A more efficient method is to use a solver. Check that the solver add-in is activated (in Options) and open it. Set the values appropriately. Click Solve and the y value that produces a zero for the equation will appear. If you need to solve for multiple roots, you will need to start from different initial guesses.
5.3.4 Optimal trapezoidal geometry
Most fluid mechanics texts that include open channel flow include a derivation of optimal geometry for a trapezoidal channel. This is also called the most efficient cross section. This means that for a given A and m, there is an optimal flow depth and bottom width for the channel, defined by Equations (5.11) and (5.12). \\[\\begin{equation} b_{opt}=2y\\left(\\sqrt{1+m^2}-m\\right) \\tag{5.11} \\end{equation}\\] \\[\\begin{equation} y_{opt}=\\sqrt{\\frac{A}{2\\sqrt{1+m^2}-m}} \\tag{5.12} \\end{equation}\\] These may be calculated manually, but they are also returned by the manningt function of the hydraulics package in R. Example 5.3 demonstrates this.
Example 5.3 Find the optimal channel width to transmit 360 ft3/s at a depth of 3 ft with n=0.015, m=1, Sf=0.00088.
ans <- hydraulics::manningt(Q = 360., n = 0.015, m = 1, y = 3.0, Sf = 0.00088, units = "Eng")
knitr::kable(format(as.data.frame(ans), digits = 2), format = "pipe", padding=0)
Q V A P R y b m Sf B n yc Fr Re bopt
360 5.3 68 28 2.4 3 20 1 0.00088 26 0.015 2.1 0.57 1159705 4.8
cat(sprintf("Optimal bottom width: %.5f ft\\n", ans$bopt))
#> Optimal bottom width: 4.76753 ft
The results show that, aside from rounding, the required width is approximately 20 ft, while the bottom width for optimal hydraulic efficiency would be 4.76 ft. To check the depth that would be associated with a channel of the optimal width, substitute the optimal width for b and solve for y:
ans <- hydraulics::manningt(Q = 360., n = 0.015, m = 1, b = 4.767534, Sf = 0.00088, units = "Eng")
cat(sprintf("Optimal depth: %.5f ft\\n", ans$yopt))
#> Optimal depth: 5.75492 ft
5.4 Circular Channels (flowing partially full)
Civil engineers encounter many situations with circular pipes that are flowing only partially full, such as storm and sanitary sewers.
Figure 5.3: Typical circular cross section
The relationships between the depth of water and the values needed in the Manning equation are:
Depth (or fractional depth as written here) is described by Equation (5.13) \\[\\begin{equation} \\frac{y}{D}=\\frac{1}{2}\\left(1-\\cos{\\frac{\\theta}{2}}\\right) \\tag{5.13} \\end{equation}\\]
Area is described by Equation (5.14) \\[\\begin{equation} A=\\left(\\frac{\\theta-\\sin{\\theta}}{8}\\right)D^2 \\tag{5.14} \\end{equation}\\] (Be sure to use \\(\\theta\\) in radians.)
Wetted perimeter is described by Equation (5.15) \\[\\begin{equation} P=\\frac{D\\theta}{2} \\tag{5.15} \\end{equation}\\]
Combining Equations (5.14) and (5.15): \\[\\begin{equation} R=\\frac{D}{4}\\left(1-\\frac{\\sin{\\theta}}{\\theta}\\right) \\tag{5.16} \\end{equation}\\]
Top width: \\(B=D\\,\\sin{\\frac{\\theta}{2}}\\)
Substituting Equations (5.14) and (5.16) into the Manning equation, Equation (5.2), produces (5.17).
\\[\\begin{equation} \\theta^{-\\frac{2}{3}}\\left(\\theta-\\sin{\\theta}\\right)^\\frac{5}{3}-CnQD^{-\\frac{8}{3}}S^{-\\frac{1}{2}}=0 \\tag{5.17} \\end{equation}\\] where C=20.16 for SI units and C=13.53 for US Customary (English) units.
5.4.1 Solving the Manning equation for a circular pipe in R
As was demonstrated with pipe flow, a function could be written with Equation (5.17) and a solver applied to find the value of \\(\\theta\\) for the given flow conditions with a known D, S, n and Q. The value for \\(\\theta\\) could then be used with Equations (5.13), (5.14) and (5.15) to recover geometric values.
Hydraulic analysis of circular pipes flowing partially full often accounts for the value of Manning’s n varying with depth (Camp, 1946); some standards recommend fixed n values, and others require the use of a depth-varying n. The R package hydraulics has implemented those routines to enable these calculations, including using a fixed n (the default) or a depth-varying n. For an existing pipe, a common problem is the determination of the depth, y, that a given flow, Q, will have given a pipe diameter, d, slope, S, and roughness, n. Example 5.4 demonstrates this.
Example 5.4 Find the uniform (normal) flow depth, y, for a circular pipe with diameter d=0.2 m carrying Q=0.01 m3/s, with n=0.013 and S=0.001. Do this assuming both that Manning n is constant with depth and that it varies with depth.
The function manningc from the hydraulics package is used. Any one of the variables in the Manning equation, and related geometric variables, may be treated as an unknown.
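Before turning to the package, the root-finding that underlies Equation (5.17) can be sketched directly; a bisection on \\(\\theta\\) in Python for the constant-n case of Example 5.4 (illustration only):

```python
import math

# Solve Equation (5.17) for theta by bisection, with the Example 5.4 values:
# Q = 0.01 m^3/s, n = 0.013, D = 0.2 m, S = 0.001, C = 20.16 (SI units)
C, n, Q, D, S = 20.16, 0.013, 0.01, 0.2, 0.001

def g_theta(theta):
    return (theta ** (-2 / 3) * (theta - math.sin(theta)) ** (5 / 3)
            - C * n * Q * D ** (-8 / 3) * S ** -0.5)

lo, hi = 0.1, 2 * math.pi    # bracketed between a nearly empty and a full pipe
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g_theta(mid) < 0 else (lo, mid)
theta = 0.5 * (lo + hi)
y = (D / 2) * (1 - math.cos(theta / 2))   # recover depth via Equation (5.13)
print(f"theta = {theta:.2f} rad, y = {y:.3f} m")   # y = 0.158 m, the constant-n depth
```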
ans <- hydraulics::manningc(Q=0.01, n=0.013, Sf=0.001, d = 0.2, units="SI", ret_units = TRUE)
ans2 <- hydraulics::manningc(Q=0.01, n=0.013, Sf=0.001, d = 0.2, n_var = TRUE, units="SI", ret_units = TRUE)
df <- data.frame(Constant_n = unlist(ans), Variable_n = unlist(ans2))
knitr::kable(df, format = "html", digits=3, padding = 0, col.names = c("Constant n","Variable n")) |> kableExtra::kable_styling(full_width = F)
Constant n Variable n
Q 0.010 0.010
V 0.376 0.344
A 0.027 0.029
P 0.437 0.482
R 0.061 0.060
y 0.158 0.174
d 0.200 0.200
Sf 0.001 0.001
n 0.013 0.014
yc 0.085 0.085
Fr 0.297 0.235
Re 22342.979 20270.210
Qf 0.010 0.010
It is also sometimes convenient to see a cross-section diagram.
hydraulics::xc_circle(y = ans$y, d=ans$d, units = "SI")
5.5 Critical flow
Critical flow in open channel flow is described in general by Equation (5.4). For any channel geometry and flow rate a convenient plot is a specific energy diagram, which illustrates the different flow depths that can occur for any given specific energy. Specific energy is defined by Equation (5.18). \\[\\begin{equation} E=y+\\frac{V^2}{2g} \\tag{5.18} \\end{equation}\\] It can be interpreted as the total energy head, or energy per unit weight, relative to the channel bottom. For a trapezoidal channel, critical flow conditions occur as described by Equation (5.4). Combining that with trapezoidal geometry produces Equation (5.19) \\[\\begin{equation} \\frac{Q^2}{g}=\\frac{\\left(by_c+m{y_c}^2\\right)^3}{b+2my_c} \\tag{5.19} \\end{equation}\\] where \\(y_c\\) indicates critical flow depth. This is important for understanding what may happen to the water surface when flow encounters an obstacle or transition. For the channel of Example 5.3, the diagram is shown in Figure 5.4.
hydraulics::spec_energy_trap( Q = 360, b = 20, m = 1, scale = 4, units = "Eng" )
Figure 5.4: A specific energy diagram for the conditions of Example 5.3.
This provides an illustration that for y=3 ft the flow is subcritical (above the critical depth). Specific energy for the conditions of the prior example is \\[E=y+\\frac{V^2}{2g}=3.0+\\frac{5.22^2}{2*32.2}=3.42 ft\\] If the channel bottom had an abrupt rise of \\(E-E_c=3.42-3.03=0.39 ft\\) critical depth would occur over the hump. A rise of anything greater than that would cause damming to occur. Once flow over a hump is critical, downstream of the hump the flow will be in supercritical conditions, flowing at the alternate depth. The specific energy for a given depth y and alternate depth can be added to the plot by including an argument for depth, y, as in Figure 5.5. hydraulics::spec_energy_trap( Q = 360, b = 20, m = 1, scale = 4, y=3.0, units = "Eng" ) Figure 5.5: A specific energy diagram for the conditions of Example 5.3 with an additional y value added. 5.6 Flow in Rectangular Channels When working with rectangular channels the open channel equations simplify, because flow, \\(Q\\), can be expressed as flow per unit width, \\(q = Q/b\\), where \\(b\\) is the channel width. Since \\(Q/A=V\\) and \\(A=by\\), Equation (5.18) can be written as Equation (5.20): \\[\\begin{equation} E=y+\\frac{Q^2}{2gA^2}=y+\\frac{q^2}{2gy^2} \\tag{5.20} \\end{equation}\\] Equation (5.19) for critical depth, \\(y_c\\), also is simplified for rectangular channels to Equation (5.21): \\[\\begin{equation} y_c = \\left({\\frac{q^2}{g}}\\right)^{1/3} \\tag{5.21} \\end{equation}\\] Combining Equation (5.20) and Equation (5.21) shows that at critical conditions, the minimum specific energy is: \\[\\begin{equation} E_{min} = \\frac{3}{2} y_c \\tag{5.22} \\end{equation}\\] Example 5.5, based on an exercise from the open-channel flow text by Sturm (Sturm, 2021), demonstrates how to solve for the depth through a rectangular section when the bottom height changes. Example 5.5 A 0.5 m wide rectangular channel carries a flow of 2.2 m\\(^3\\)/s at a depth of 2 m (\\(y_1\\)=2m). 
If the channel bottom rises 0.25 m (\\(\\Delta z=0.25~ m\\)), and head loss, \\(h_L\\) over the transition is negligible, what is the depth, \\(y_2\\) after the rise in channel bottom? Figure 5.6: The rectangular channel of Example 5.5 with an increase in channel bottom height downstream. A specific energy diagram is very helpful for establishing upstream conditions and estimating \\(y_2\\). p1 <- hydraulics::spec_energy_trap( Q = 2.2, b = 0.5, m = 0, y = 2, scale = 2.5, units = "SI" ) p1 Figure 5.7: A specific energy diagram for the conditions of Example 5.5. The values of \\(y_c\\) and \\(E_{min}\\) shown in the plot can be verified using Equations (5.21) and (5.22). These should always be checked to characterize the incoming flow and anticipate what will happen as flow passes over a hump. Since \\(y_1\\) > \\(y_c\\) the upstream flow is subcritical, and flow can be expected to drop as it passes over the hump. Upstream and downstream specific energy are related by Equation (5.23): \\[\\begin{equation} E_1-E_2=\\Delta z + h_L \\tag{5.23} \\end{equation}\\] Since \\(h_L\\) is negligible in this example, the downstream specific energy, \\(E_2\\), is lower than the upstream \\(E_1\\) by an amount \\(\\Delta z\\), or \\[\\begin{equation} E_2 = E_1 - \\Delta z \\tag{5.24} \\end{equation}\\] For a 0.25 m rise, and using \\(q = Q/b = 2.2/0.5 = 4.4\\), combining Equation (5.24) and Equation (5.20): \\[E_2 = E_1 - 0.25 = 2 + \\frac{4.4^2}{2(9.81)(2^2)} - 0.25 = 2.247 - 0.25 = 1.997 ~m\\] From the specific energy diagram, for \\(E_2=1.997 ~ m\\) a depth of about \\(y_2 \\approx 1.6 ~ m\\) would be expected, and the flow would continue in subcritical conditions. 
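The \\(y_c\\) and \\(E_{min}\\) annotations in Figure 5.7, and the computed \\(E_1\\) and \\(E_2\\), can be checked directly from Equations (5.20), (5.21), (5.22), and (5.24) — a short sketch:

```r
# Check key quantities in Example 5.5 (SI units)
g <- 9.81
q <- 2.2 / 0.5                  # unit discharge q = Q/b
yc <- (q^2 / g)^(1/3)           # critical depth, Eq. (5.21)
Emin <- 1.5 * yc                # minimum specific energy, Eq. (5.22)
E1 <- 2 + q^2 / (2 * g * 2^2)   # upstream specific energy, Eq. (5.20) at y1 = 2 m
E2 <- E1 - 0.25                 # downstream specific energy, Eq. (5.24)
round(c(yc, Emin, E1, E2), 3)
#> [1] 1.254 1.881 2.247 1.997
```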
The value of \\(y_2\\) can be calculated using Equation (5.20): \\[1.997 = y_2 + \\frac{4.4^2}{2(9.81)(y_2^2)}\\] which can be rearranged to \\[0.9867 - 1.997 y_2^2 + y_2^3= 0\\] Solving a polynomial in R is straightforward with the polyroot function; Re extracts the real part of each root after filtering out any roots with a nonzero imaginary part. all_roots <- polyroot(c(0.9867, 0, -1.997, 1)) Re(all_roots)[abs(Im(all_roots)) < 1e-6] The three real roots are approximately 0.990, -0.615, and 1.622. The negative root is meaningless, the lower positive root is the supercritical depth for \\(E_2 = 1.997 ~ m\\), and the larger positive root is the subcritical depth. Thus the correct solution is \\(y_2 = 1.62 ~ m\\) when the channel bottom rises by 0.25 m. A vertical line or other annotation can be added to the specific energy diagram to indicate \\(E_2\\) using ggplot2 with a command like p1 + ggplot2::geom_vline(xintercept = 1.997, linetype=3). The hydraulics R package can also add lines to a specific energy diagram for up to two depths: p2 <- hydraulics::spec_energy_trap(Q = 2.2, b = 0.5, m = 0, y = c(2, 1.62), scale = 2.5, units = "SI") p2 Figure 5.8: A specific energy diagram for the conditions of Example 5.5 with added annotation for when the bottom elevation rises. The specific energy diagram shows that if \\(\\Delta z > E_1 - E_{min}\\), the downstream specific energy, \\(E_2\\) would be to the left of the curve, so no feasible solution would exist. At that point damming would occur, raising the upstream depth, \\(y_1\\), and thus increasing \\(E_1\\) until \\(E_2 = E_{min}\\). The largest rise in channel bottom height that will not cause damming is called the critical hump height: \\(\\Delta z_{c} = E_1 - E_{min}\\). 5.7 Gradually varied steady flow When water approaches an obstacle, it can back up, with its depth increasing. The effect can be observed well upstream. 
Similarly, as water approaches a drop, such as with a waterfall, the water level declines, and that effect can also be seen upstream. In general, any change in slope or roughness will produce changes in depth along a channel length. There are three depths that are important to define for a channel: \\(y_c\\), critical depth, found using Equation (5.4) \\(y_n\\), normal depth, found using Equation (5.2) \\(y\\), flow depth, found using Equation (5.5) If \\(y_n < y_c\\) flow is supercritical (for example, flowing down a steep slope); if \\(y_n > y_c\\) flow is subcritical. Variations in the water surface are classified by profile types based on whether the normal flow is subcritical (or mild sloped, M) or supercritical (steep, S), as in Figure 5.9 (Davidian, Jacob, 1984). Figure 5.9: Types of flow profiles on mild and steep slopes In addition to channel transitions, changes in channel slope or roughness (Manning n) will cause the flow surface to vary. Some of these conditions are illustrated in Figure 5.10 (Davidian, Jacob, 1984). Figure 5.10: Types of flow profiles with changes in slope or roughness Typically, for supercritical flow the calculations start at an upstream cross section and move downstream. For subcritical flow calculations proceed upstream. However, for the direct step method, a negative result will indicate upstream, and a positive result indicates downstream. If the water surface passes through critical depth (from supercritical to subcritical or the reverse) it is no longer gradually varied flow and the methods in this section do not apply. This can happen at abrupt changes in channel slope or roughness, or channel transitions. 5.7.1 The direct step method The direct step method looks at two cross sections in a channel where depths, \\(y_1\\) and \\(y_2\\) are defined. Figure 5.11: A gradually varied flow example. 
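The classification in Figure 5.9 can be captured in a small helper function — a sketch covering only the mild (M) and steep (S) cases shown in the figure; the name classify_profile is hypothetical, not a package function:

```r
# Classify a gradually varied flow profile from flow depth y,
# normal depth yn, and critical depth yc (mild and steep slopes only)
classify_profile <- function(y, yn, yc) {
  slope <- if (yn > yc) "M" else "S"           # mild if yn > yc, steep otherwise
  upper <- max(yn, yc); lower <- min(yn, yc)   # zone boundaries
  zone <- if (y > upper) 1 else if (y > lower) 2 else 3
  paste0(slope, zone)
}
classify_profile(y = 1.4, yn = 1.147, yc = 0.856)
#> [1] "M1"
```

The depths in the call above match Example 5.6, which is confirmed as an M1 profile later in this section.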
The distance between these two cross-sections, \\({\\Delta}X\\), is calculated using Equation (5.25) \\[\\begin{equation} {\\Delta}X=\\frac{E_1-E_2}{\\overline{S}-S_0} \\tag{5.25} \\end{equation}\\] Where E is the specific energy from Equation (5.18), \\(S_0\\) is the slope of the channel bed, and \\(S\\) is the slope of the energy grade line. \\(\\overline{S}\\) is the average of the S values at each cross section calculated using the Manning equation, Equation (5.2) solved for slope, as in Equation (5.26). \\[\\begin{equation} S=\\frac{n^2\\,V^2}{C^2\\,R^{\\frac{4}{3}}} \\tag{5.26} \\end{equation}\\] Example 5.6 demonstrates this. Example 5.6 Water flows at 10 m3/s in a trapezoidal channel with n=0.015, bottom width 3 m, side slope of 2:1 (H:V) and longitudinal slope 0.0009 (0.09%). At the location of a USGS stream gage the flow depth is 1.4 m. Use the direct step method to find the distance to the point where the depth is 1.2 m and determine whether it is upstream or downstream. Begin by setting up a function to calculate the Manning slope and setting up the input data. #function to calculate Manning slope slope_f <- function(V,n,R,C) { return(V^2*n^2/(C^2*R^(4./3.))) } #Now set up input data ################################## #input Flow Q=10.0 #input depths: y1 <- 1.4 #starting depth y2 <- 1.2 #final depth #Define the number of steps into which the difference in y will be broken nsteps <- 2 #channel geometry: bottom_width <- 3 side_slope <- 2 #side slope is H:V. Use zero for rectangular manning_n <- 0.015 long_slope <- 0.0009 units <- "SI" #"SI" or "Eng" if (units == "SI") { C <- 1 #Manning constant: 1 for SI, 1.49 for US units g <- 9.81 } else { #"Eng" means English, or US system C <- 1.49 g <- 32.2 } #find depth increment for each step, depths at which to solve depth_incr <- (y2 - y1) / nsteps depths <- seq(from=y1, to=y2, by=depth_incr) First check to see if the flow is subcritical or supercritical and find the normal depth. 
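Normal depth can also be found without a package by root-finding on the Manning equation, Equation (5.2) — a sketch for this channel (SI units, so C = 1); manning_Q is an illustrative name:

```r
# Flow rate from Manning's equation for a trapezoidal channel
manning_Q <- function(y, n, So, b, m, C = 1) {
  A <- (b + m * y) * y             # flow area
  P <- b + 2 * y * sqrt(1 + m^2)   # wetted perimeter
  (C / n) * A * (A / P)^(2/3) * sqrt(So)
}
# Normal depth: the depth at which manning_Q equals the actual flow of 10 m^3/s
yn <- uniroot(function(y) manning_Q(y, n = 0.015, So = 0.0009, b = 3, m = 2) - 10,
              interval = c(0.01, 10))$root
yn   # about 1.147 m
```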
Critical and normal depths can be calculated using the manningt function in the hydraulics package, as in Example 5.2. However, because other functionality of the rivr package is used below, these depths will be calculated with rivr functions. rivr::critical_depth(Q = Q, yopt = y1, g = g, B = bottom_width , SS = side_slope) #> [1] 0.8555011 #note using either depth for yopt produces the same answer rivr::normal_depth(So = long_slope, n = manning_n, Q = Q, yopt = y1, Cm = C, B = bottom_width , SS = side_slope) #> [1] 1.147137 The normal depth is greater than the critical depth, so the channel has a mild slope. The beginning and ending depths are above normal depth. This indicates the profile type, following Figure 5.9, is M-1, so the flow depth should decrease going upstream. This also verifies that the flow depth between these two points does not pass through critical flow, so is a valid gradually varied flow problem. For each increment the \\({\\Delta}X\\) value needs to be calculated, and they need to be accumulated to find the total length, L, between the two defined depths. #loop through each channel segment (step), calculating the length for each segment. 
#The channel_geom function from the rivr package is helpful L <- 0 for ( i in 1:nsteps ) { #find hydraulic geometry, E and Sf at first depth xc1 <- rivr::channel_geom(y=depths[i], B=bottom_width, SS=side_slope) V1 <- Q/xc1[['A']] R1 <- xc1[['R']] E1 <- depths[i] + V1^2/(2*g) Sf1 <- slope_f(V1,manning_n,R1,C) #find hydraulic geometry, E and Sf at second depth xc2 <- rivr::channel_geom(y=depths[i+1], B=bottom_width, SS=side_slope) V2 <- Q/xc2[['A']] R2 <- xc2[['R']] E2 <- depths[i+1] + V2^2/(2*g) Sf2 <- slope_f(V2,manning_n,R2,C) Sf_avg <- (Sf1 + Sf2) / 2.0 dX <- (E1 - E2) / (Sf_avg - long_slope) L <- L + dX } cat(sprintf("Using %d steps, total distance from depth %.2f to %.2f = %.2f m\\n", nsteps, y1, y2, L)) #> Using 2 steps, total distance from depth 1.40 to 1.20 = -491.75 m The result is negative, verifying that the location of depth y2 is upstream of y1. Of course, the result will become more precise as more incremental steps are included, as shown in Figure 5.12 Figure 5.12: Variation of number of calculation steps to final calculated distance. The direct step method is also implemented in the hydraulics package, and can be applied to the same problem as above, as illustrated in Example 5.7. Example 5.7 Water flows at 10 m3/s in a trapezoidal channel with n=0.015, bottom width 3 m, side slope of 2:1 (H:V) and longitudinal slope 0.0009 (0.09%). At the location of a USGS stream gage the flow depth is 1.4 m. Use the direct step method to find the distance to the point where the depth is 1.2 m and determine whether it is upstream or downstream. hydraulics::direct_step(So=0.0009, n=0.015, Q=10, y1=1.4, y2=1.2, b=3, m=2, nsteps=2, units="SI") #> y1=1.400, y2=1.200, yn=1.147, yc=0.855585 #> Profile type = M1 #> # A tibble: 3 × 7 #> x z y A Sf E Fr #> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> #> 1 0 0 1.4 8.12 0.000407 1.48 0.405 #> 2 -192. 0.173 1.3 7.28 0.000548 1.40 0.466 #> 3 -492. 
0.443 1.2 6.48 0.000753 1.32 0.541 This produces the same result, and verifies that the water surface profile is type M-1. 5.7.2 Standard step method The standard step method works similarly to the direct step method, except from one known depth the second depth is determined at a known distance, L. This is a preferred method when the depth at a critical location, such as a bridge, is needed. The rivr package implements the standard step method in its compute_profile function. To compare it to the direct step method, check the depth at \\(y_2\\) given the total distance from Example 5.6. Example 5.8 For the same channel and flow rate as Example 5.6, determine the depth of water at the distance L determined above. The function requires the distance to be positive, so apply the absolute value to the L value. dist = abs(L) ans <- rivr::compute_profile(So = long_slope, n = manning_n, Q = Q, y0 = y1, Cm = C, g = g, B = bottom_width, SS = side_slope, stepdist = dist/nsteps, totaldist = dist) #Distances along the channel where depths were determined ans$x #> [1] 0.0000 -245.8742 -491.7483 #Depths at each distance ans$y #> [1] 1.400000 1.277009 1.200592 This shows the distances and depths at each of the steps defined. Consistent with the above, the distances are negative, showing that they are progressing upstream. The result for \\(y_2\\) is essentially identical to that obtained with the direct step method. 5.8 Rapidly varied flow (the hydraulic jump) Figure 5.13: A hydraulic jump at St. Anthony Falls, Minnesota. In the discussion of critical flow in Section 5.5, the concept of alternate depths was introduced, where, for a given flow rate in a channel with known geometry, the depth typically may assume two possible values, one subcritical and one supercritical. For the case of supercritical flow transitioning to subcritical flow, a smooth transition is impossible, so a hydraulic jump occurs. A hydraulic jump always dissipates some of the incoming energy. 
A hydraulic jump is depicted in Figure 5.14 (Peterka, Alvin J., 1978). Figure 5.14: A typical hydraulic jump. 5.8.1 Sequent (or conjugate) depths The two depths on either side of a hydraulic jump are called sequent depths or conjugate depths. The relationship between them can be established using the momentum equation to develop a general expression (for any open channel) for the momentum function, M, as in Equation (5.27). \\[\\begin{equation} M=Ah_c+\\frac{Q^2}{gA} \\tag{5.27} \\end{equation}\\] where \\(h_c\\) is the distance from the water surface to the centroid of the channel cross-section. For a trapezoidal channel, the momentum equation becomes that described by Equation (5.28). \\[\\begin{equation} M=\\frac{by^2}{2}+\\frac{my^3}{3}+\\frac{Q^2}{gy\\left(b+my\\right)} \\tag{5.28} \\end{equation}\\] For the case of a rectangular channel, setting m=0 and setting the momentum function for two sequent depths, y1 and y2, equal produces the relationship in Equation (5.29). \\[\\begin{equation} \\frac{y_2}{y_1}=\\frac{1}{2}\\left(-1+\\sqrt{1+8Fr_1^2}\\right) \\quad \\textrm{or} \\quad \\frac{y_1}{y_2}=\\frac{1}{2}\\left(-1+\\sqrt{1+8Fr_2^2}\\right) \\tag{5.29} \\end{equation}\\] where Frn is the Froude Number [Equation (5.1)] at section n. Again, for the case of a rectangular channel, the energy head loss through a hydraulic jump simplifies to Equation (5.30). \\[\\begin{equation} h_l=\\frac{\\left(y_2-y_1\\right)^3}{4y_1y_2} \\tag{5.30} \\end{equation}\\] Given that the momentum function must be conserved on either side of a hydraulic jump, finding the sequent depth for any known depth becomes straightforward for trapezoidal shapes. Setting M1 = M2 in Equation (5.28) allows the use of a solver, as in Example 5.9. Example 5.9 A trapezoidal channel with a bottom width of 0.5 m and a side slope of 1:1 carries a flow of 0.2 m3/s. The depth on one side of a hydraulic jump is 0.1 m. Find the sequent depth, the energy head loss, and the power dissipation in Watts. 
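Before turning to the packaged solver, the momentum balance itself can be solved with a root-finder: evaluate M at the known depth using Equation (5.28), then search for the other depth with the same M. A sketch with the Example 5.9 values:

```r
# Momentum function for a trapezoidal channel, Equation (5.28)
M <- function(y, Q, b, m, g = 9.81) {
  b * y^2 / 2 + m * y^3 / 3 + Q^2 / (g * y * (b + m * y))
}
M1 <- M(0.1, Q = 0.2, b = 0.5, m = 1)   # momentum at the known supercritical depth
# Search above critical depth (about 0.218 m) for the subcritical sequent depth
y2 <- uniroot(function(y) M(y, Q = 0.2, b = 0.5, m = 1) - M1,
              interval = c(0.25, 2))$root
y2   # about 0.394 m
```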
flow <- 0.2 ans <- hydraulics::sequent_depth(Q=flow,b=0.5,y=0.1,m=1,units = "SI", ret_units = TRUE) #print output of function as.data.frame(ans) #> ans #> y 0.1 [m] #> y_seq 0.3941009 [m] #> yc 0.217704 [m] #> Fr 3.635731 [1] #> Fr_seq 0.3465538 [1] #> E 0.666509 [m] #> E_seq 0.4105265 [m] #Find energy head loss hl <- abs(ans$E - ans$E_seq) hl #> 0.2559825 [m] #Express this as a power loss gamma <- hydraulics::specwt(units = "SI") P <- gamma*flow*hl cat(sprintf("Power loss = %.1f Watts\\n",P)) #> Power loss = 501.4 Watts The energy loss across hydraulic jumps varies with the Froude number of the incoming flow, as depicted in Figure 5.15 (Peterka, Alvin J., 1978). Figure 5.15: Types of hydraulic jumps. 5.8.2 Location of a hydraulic jump In hydraulic infrastructure where hydraulic jumps will occur there are usually engineered features, such as baffles or basins, to force a hydraulic jump to occur in specific locations, to protect downstream waterways from the turbulent effects of an uncontrolled hydraulic jump. In the absence of engineered features to cause a jump, the location of a hydraulic jump can be determined using the concepts of Sections 5.7 and 5.8. Example 5.10 demonstrates the determination of the location of a hydraulic jump when normal flow conditions exist at some distance upstream and downstream of the jump. Example 5.10 A rectangular (a trapezoid with side slope, m=0) concrete channel with a bottom width of 3 m carries a flow of 8 m3/s. The upstream channel slopes steeply at So=0.018 and discharges onto a mild slope of So=0.0015. Determine the height of the jump and its location. First find the normal depth on each slope, and the critical depth for the channel. 
yn1 <- hydraulics::manningt(Q = 8, n = 0.013, m = 0, Sf = 0.018, b = 3, units = "SI")$y yn2 <- hydraulics::manningt(Q = 8, n = 0.013, m = 0, Sf = 0.0015, b = 3, units = "SI")$y yc <- hydraulics::manningt(Q = 8, n = 0.013, m = 0, Sf = 0.0015, b = 3, units = "SI")$yc cat(sprintf("yn1 = %.3f m, yn2 = %.3f m, yc = %.3f m\\n", yn1, yn2, yc)) #> yn1 = 0.498 m, yn2 = 1.180 m, yc = 0.898 m Recall that the calculation of yc only depends on flow and channel geometry (Q, b, m), so the values of n and Sf can be arbitrary for that command. These results confirm that flow is supercritical upstream and subcritical downstream, so a hydraulic jump will occur. The hydraulic jump will either begin at yn1 (and jump to the sequent depth for yn1) or end at yn2 (beginning at the sequent depth for yn2). The possibilities are shown in Figure 5.9 in the lower right panel. First check the two sequent depths. yn1_seq <- hydraulics::sequent_depth(Q = 8, b = 3, y=yn1, m = 0, units = "SI")$y_seq yn2_seq <- hydraulics::sequent_depth(Q = 8, b = 3, y=yn2, m = 0, units = "SI")$y_seq cat(sprintf("yn1_seq = %.3f m, yn2_seq = %.3f m\\n", yn1_seq, yn2_seq)) #> yn1_seq = 1.476 m, yn2_seq = 0.666 m This confirms that if the jump began at yn1 (on the steep slope) it would need to jump to a level below yn2, with an S-1 curve providing the gradual increase in depth to yn2. Since yn1_seq exceeds yn2, this is not possible. That can be verified using the direct_step function to show the distance from yn1_seq to yn2 would need to be upstream (negative x values in the result), which cannot occur for this case. This means the other case must occur, with an M-3 profile raising yn1 to yn2_seq at which point the jump occurs. The direct step method can find this distance along the channel. 
hydraulics::direct_step(So=0.0015, n=0.013, Q=8, y1=yn1, y2=yn2_seq, b=3, m=0, nsteps=2, units="SI") #> # A tibble: 3 × 7 #> x z y A Sf E Fr #> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> #> 1 0 0 0.498 1.49 0.0180 1.96 2.42 #> 2 23.4 -0.0350 0.582 1.75 0.0113 1.65 1.92 #> 3 44.6 -0.0669 0.666 2.00 0.00761 1.48 1.57 The number of calculation steps (nsteps) can be increased for greater precision, but 2 steps is adequate here. Chapter 6 Momentum in water flow 6.1 Equations of linear momentum 6.2 The momentum equation in pipe design When moving water changes direction or velocity, an external force must be associated with the change. In civil engineering infrastructure this is ubiquitous and the forces associated with this must be accounted for in design. Figure 6.1: Water pipe on Capitol Hill, Seattle. 6.1 Equations of linear momentum Newton’s law relates the forces applied to a body to the rate of change of linear momentum, as in Equation (6.1) \\[\\begin{equation} \\sum{\\overrightarrow{F}}=\\frac{d\\left(m\\overrightarrow{V}\\right)}{dt} \\tag{6.1} \\end{equation}\\] For fluid flow in a hydraulic system carrying a flow Q, the equation can be written in any linear direction (x-direction in this example) as in Equation (6.2). \\[\\begin{equation} \\sum{F_x}=\\rho{Q}\\left(V_{2x}-V_{1x}\\right) \\tag{6.2} \\end{equation}\\] where \\(\\rho{Q}\\) is the mass flux through the system, \\(V_{1x}\\) is the velocity in the x-direction where flow enters the system, and \\(V_{2x}\\) is the velocity in the x-direction where flow leaves the system. \\(\\sum{F_x}\\) is the vector sum of all external forces acting on the system in the x-direction. It should be noted that the values of V are the average cross-sectional velocity. A momentum correction factor (\\(\\beta\\)) can be applied when the velocity is highly non-uniform across the cross-section. 
In nearly all civil engineering applications the adjustment factor is close enough to 1 that it is ignored in the calculations. 6.2 The momentum equation in pipe design One of the most common civil engineering applications of the momentum equation is providing the lateral restraint where a pipe bend occurs. One approach to provide the external force to keep the pipe in equilibrium is to use a thrust block, as illustrated in Figure 6.2 (Ductile Iron Pipe Research Association, 2016). Figure 6.2: A sketch of a pipe bend with a thrust block. Example 6.1 A horizontal 18-inch diameter pipe carries flow Q of water at 68\\(^\\circ\\)F with a pressure of 60 psi and encounters a bend of angle \\(\\theta=30^\\circ\\). Show how the reaction force, R, varies with the flow rate through the bend for flows up to 20 ft3/s. Ignore head loss through the bend. Taking the control volume to be the bend, the external forces acting on the bend are shown in Figure 6.3. Figure 6.3: External forces on the pipe. Note that if the pipe were not horizontal, the weight of the water in the pipe would also need to be included. Including all of the external forces in the x-direction on the left side of Equation (6.2) and recognizing that V1x=V1 and V2x=V2cos\\(\\theta\\) produces: \\[P_1A_1-P_2A_2\\cos\\theta-R_x=\\rho{Q}\\left(V_{2}\\cos\\theta-V_{1}\\right)\\] Rearranging to solve for Rx gives Equation (6.3). \\[\\begin{equation} R_x=P_1A_1-P_2A_2\\cos\\theta-\\rho{Q}\\left(V_{2}\\cos\\theta-V_{1}\\right) \\tag{6.3} \\end{equation}\\] Similarly in the y-direction Equation (6.4) can be assembled, noting that V1y=0 and V2y=\\(-\\)V2sin\\(\\theta\\). \\[\\begin{equation} R_y=P_2A_2\\sin\\theta-\\rho{Q}\\left(-V_{2}\\sin\\theta\\right) \\tag{6.4} \\end{equation}\\] This can be set up in R in many ways, such as the following. 
#Input Data -- ensure units are consistent in ft, lbf (pound force), sec D1 <- units::set_units(18/12, ft) D2 <- units::set_units(18/12, ft) P1 <- units::set_units(60*144, lbf/ft^2) #convert psi to lbf/ft^2 P2 <- units::set_units(60*144, lbf/ft^2) theta <- 30*(pi/180) #convert to radians for sin, cos functions rho <- hydraulics::dens(T=68, units="Eng", ret_units = TRUE) # calculations - vary flow from 0 to 20 ft^3/s Q <- units::set_units(seq(0,20,1), ft^3/s) A1 <- pi/4*D1^2 A2 <- pi/4*D2^2 V1 <- Q/A1 V2 <- Q/A2 Rx <- P1*A1-P2*A2*cos(theta)-rho*Q*(V2*cos(theta)-V1) Ry <- P2*A2*sin(theta)-rho*Q*(-V2*sin(theta)) R <- sqrt(Rx^2 + Ry^2) plot(Q,R) When Q=0, only the pressure terms contribute to R. This plot shows that for typical water main conditions the change in direction of the velocity vectors adds a small amount (less than 3% in this example) to the calculated R value. This is why design guidelines for water mains often neglect the velocity term in Equation (6.2). In other industrial or laboratory conditions it may not be valid to neglect that term. Chapter 7 Pumps and how they operate in a hydraulic system 7.1 Defining the system curve 7.2 Defining the pump characteristic curve 7.3 Finding the operating point For any system delivering water through circular pipes with the assistance of a pump, the selection of the pump requires a consideration of both the pump characteristics and the energy required to deliver different flow rates through the system. These are described by the system and pump characteristic curves. Where they intersect defines the operating point, the flow and (energy) head at which the pump would operate in that system. 
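With the quadratic curve forms used in this chapter, the operating point can even be found in closed form: setting the system curve \\(h_s + KQ^2\\) equal to a pump curve \\(a - cQ^2\\) gives \\(Q = \\sqrt{(a - h_s)/(K + c)}\\). A sketch using illustrative coefficients (they happen to match Examples 7.1 and 7.2):

```r
# Closed-form intersection of system curve h = hs + K*Q^2
# and pump curve h = a - cc*Q^2
hs <- 30; K <- 0.160     # system curve coefficients (head in ft, Q in cfs)
a <- 82.5; cc <- 0.201   # pump curve coefficients
Qop <- sqrt((a - hs) / (K + cc))  # flow where the curves intersect
hop <- hs + K * Qop^2             # head at the operating point
c(Qop, hop)   # about 12.06 cfs and 53.3 ft
```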
7.1 Defining the system curve Figure 7.1: A simple hydraulic system (from https://www.castlepumps.com) For a simple system the loss of head (energy per unit weight) due to friction, \\(h_f\\), is described by the Darcy-Weisbach equation, which can be simplified as in Equation (7.1). \\[\\begin{equation} h_f = \\frac{fL}{D}\\frac{V^2}{2g} = \\frac{8fL}{\\pi^{2}gD^{5}}Q^{2} = KQ^{2} \\tag{7.1} \\end{equation}\\] The total dynamic head the system requires a pump to provide, \\(h_p\\), is found by solving the energy equation from the upstream reservoir (point 1) to the downstream reservoir (point 2), as in Equation (7.2). \\[\\begin{equation} h_p = \\left(z+\\frac{P}{\\gamma}+\\frac{V^2}{2g}\\right)_2 - \\left(z+\\frac{P}{\\gamma}+\\frac{V^2}{2g}\\right)_1+h_f \\tag{7.2} \\end{equation}\\] For the simple system in Figure 7.1, the velocity can be considered negligible in both reservoirs 1 and 2, and the pressures at both reservoirs are atmospheric, so Equation (7.2) can be simplified to Equation (7.3). \\[\\begin{equation} h_p = \\left(z_2 - z_1\\right) + h_f=h_s+h_f=h_s+KQ^2 \\tag{7.3} \\end{equation}\\] Using the hydraulics package, the coefficient, K, can be calculated manually or using other package functions for friction loss in a pipe system using \\(Q=1\\). Using this to develop a system curve is demonstrated in Example 7.1. Example 7.1 Develop a system curve for a pipe with a diameter of 20 inches, length of 3884 ft, and absolute roughness of 0.0005 ft. Use kinematic viscosity, \\(\\nu\\) = 1.23 x 10-5 ft2/s. Assume a static head, z2 - z1 = 30 ft. 
ans <- hydraulics::darcyweisbach(Q = 1,D = 20/12, L = 3884, ks = 0.0005, nu = 1.23e-5, units = "Eng") cat(sprintf("Coefficient K: %.3f\\n", ans$hf)) #> Coefficient K: 0.160 scurve <- hydraulics::systemcurve(hs = 30, K = ans$hf, units = "Eng") print(scurve$eqn) #> [1] "h == 30 + 0.16*Q^2" For this function of the hydraulics package, Q is either in ft\\(^3\\)/s or m\\(^3\\)/s, depending on whether Eng or SI is specified for units. Often data for flows in pumping systems are in other units such as gpm or liters/s, so unit conversions would need to be applied. 7.2 Defining the pump characteristic curve The pump characteristic curve is based on data or graphs obtained from a pump manufacturer, such as that depicted in Figure 7.2. Figure 7.2: A sample set of pump curves (from https://www.gouldspumps.com). The three red dots are points selected to approximate the curve The three points, selected manually across the range of the curve, are used to generate a polynomial fit to the curve. There are many forms of equations that could be used to fit these three points to a smooth, continuous curve. Three common ones are implemented in the hydraulics package, shown in Table 7.1. Table 7.1: Common equation forms for pump characteristic curves. type Equation poly1 \\(h=a+{b}{Q}+{c}{Q}^2\\) poly2 \\(h=a+{c}{Q}^2\\) poly3 \\(h=h_{shutoff}+{c}{Q}^2\\) The \\(h_{shutoff}\\) value is the pump head at \\(Q={0}\\). Many methods can be used to fit a polynomial to a set of points. The hydraulics package includes the pumpcurve function for this purpose. The coordinates of the points can be input as numeric vectors, being careful to use correct units, consistent with those used for the system curve. Manufacturer’s pump curves often use units for flow that are not what the hydraulics package needs, and the units package provides a convenient way to convert them as needed. Developing the pump characteristic curve using the hydraulics package is demonstrated in Example 7.2. 
Example 7.2 Develop a pump characteristic curve for the pump in Figure 7.2, using the three points marked in red. Use the poly2 form from Table 7.1. qgpm <- units::set_units(c(0, 5000, 7850), gallons/minute) #Convert units to those needed for package, and consistent with system curve qcfs <- units::set_units(qgpm, ft^3/s) #Head units, read from the plot, are already in ft so setting units is not needed hft <- c(81, 60, 20) pcurve <- hydraulics::pumpcurve(Q = qcfs, h = hft, eq = "poly2", units = "Eng") print(pcurve$eqn) #> [1] "h == 82.5 - 0.201*Q^2" The function pumpcurve returns a pumpcurve object that includes the polynomial fit equation and a simple plot to check the fit. This can be plotted as in Figure 7.3 pcurve$p Figure 7.3: A pump characteristic curve 7.3 Finding the operating point The two curves can be combined to find the operating point of the selected pump in the defined system. This can be done by plotting them manually, solving the equations simultaneously, or by using software. The hydraulics package finds the operating point using the system and pump curves defined earlier. Example 7.3 shows how this is done. Example 7.3 Find the operating point for the pump and system curves developed in Examples 7.1 and 7.2. oppt <- hydraulics::operpoint(pcurve = pcurve, scurve = scurve) cat(sprintf("Operating Point: Q = %.3f, h = %.3f\\n", oppt$Qop, oppt$hop)) #> Operating Point: Q = 12.051, h = 53.285 The operpoint function returns an operpoint object that includes a plot of both curves. 
This can be plotted as in Figure 7.4 oppt$p Figure 7.4: The pump operating point Chapter 8 The hydrologic cycle and precipitation 8.1 Precipitation observations 8.2 Precipitation frequency 8.3 Precipitation gauge consistency – double mass curves 8.4 Precipitation interpolation and areal averaging All of the earlier chapters of this book dealt with the behavior of water in different hydraulic systems, such as canals or pipes. Now we consider the bigger picture of where the water originates, and ultimately how we can estimate how much water is available for different uses, and how much excess (flood) water systems will need to be designed and built to accommodate. A fundamental concept is the hydrologic cycle, depicted in Figure 8.1. Figure 8.1: The hydrologic cycle, from the USGS The primary variable in the hydrologic cycle from an engineering perspective is precipitation, since that is the source of the water used and managed in engineered systems. The hydromisc package will need to be installed to access some of the data used below. If it is not installed, do so following the instructions on the github site for the package. 8.1 Precipitation observations Direct measurement of precipitation is done with precipitation gauges, such as shown in Figure 8.2. Figure 8.2: National Weather Service standard 8-inch gauge (source: NWS). Precipitation can vary dramatically over short distances, so point measurements are challenging to work with when characterizing rainfall over a larger area. An image from an atmospheric river event over California is shown in Figure 8.3. Reflectivity values are converted to precipitation rates based on calibration with rain gauge observations. Figure 8.3: A raw radar image showing reflectivity values. Red squares indicate weather radar locations (source: NOAA). 
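The conversion from reflectivity to rain rate is usually a power-law Z–R relationship, \\(Z = aR^b\\). The sketch below assumes the classic Marshall–Palmer coefficients (a = 200, b = 1.6); operational products recalibrate these against gauge observations:

```r
# Convert radar reflectivity in dBZ to a rain rate in mm/hr via Z = a * R^b
dbz_to_rate <- function(dbz, a = 200, b = 1.6) {
  Z <- 10^(dbz / 10)   # dBZ is 10*log10(Z)
  (Z / a)^(1 / b)
}
dbz_to_rate(c(20, 35, 50))   # light, moderate, heavy: roughly 0.6, 5.6, 48.6 mm/hr
```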
There are additional data sets that merge many sources of data to create continuous (spatially and temporally) datasets of precipitation. While these provide excellent resources for large scale studies, we will initially focus on point observations. Obtaining precipitation data can be done in many ways. Example 8.1 demonstrates one method using R. Example 8.1 Characterize the rainfall in the city of San Jose, in Santa Clara County. For the U.S., a good starting point is to use the mapping tools at the NOAA Climate Data Online (CDO) website. From the mapping tools page, select Observations: Daily, ensure GHCN Daily is checked (so the search returns stations that are part of the Global Historical Climatology Network), and search for San Jose, CA. Figure 8.4 shows the three stations that lie within the rectangle sketched on the map, and the one that was selected. Figure 8.4: Selection results for a portion of San Jose, CA (source: CDO). The data can be downloaded directly from the CDO site as a csv file, a sample of which is included with the hydromisc package (the sample also includes air temperature data). Note the units that you specify for the data since they will not appear in the csv file. Note that this initial station search and data download can be automated in R using other packages: Using the FedData package, following a method similar to this. Using the rnoaa package, referring to the vignettes. While formats will vary depending on the source of the data, in this example we can import the csv file directly. Since units were left as ‘standard’ on the CDO website, precipitation is in inches and temperatures in °F. datafile <- system.file("extdata", "cdo_data_ghcn_23293.csv", package="hydromisc") ghcn_data <- read.csv(datafile,header=TRUE) A little cleanup of the data needs to be done to ensure the DATE column is in date format, and change any missing values (often denoted as 9999 or -9999) to NA. 
With missing values flagged as NA, R can ignore them, set them to zero, or fill them in with functions such as zoo::na.approx() or zoo::na.spline(), or with the more sophisticated imputeTS package. Finally, add a ‘water year’ column (a water year begins on October 1 and ends September 30). ghcn_data$DATE <- as.Date(ghcn_data$DATE, format="%Y-%m-%d") ghcn_data$PRCP[ghcn_data$PRCP <= -999 | ghcn_data$PRCP >= 999] <- NA wateryr <- function(d) { if (as.numeric(format(d, "%m")) >= 10) { wy <- as.numeric(format(d, "%Y")) + 1 } else { wy <- as.numeric(format(d, "%Y")) } wy } ghcn_data$wy <- sapply(ghcn_data$DATE, wateryr) A convenient package for characterizing precipitation is hydroTSM, the output of which is shown in Figure 8.5. library(hydroTSM) #create a simple data frame for plotting ghcn_prcp <- data.frame(date = ghcn_data$DATE, prcp = ghcn_data$PRCP) #convert it to a zoo object x <- zoo::read.zoo(ghcn_prcp) hydroTSM::hydroplot(x, var.type="Precipitation", main="", var.unit="inch", pfreq = "ma", from="1999-01-01", to="2022-12-31") Figure 8.5: Monthly and annual precipitation summary for San Jose, CA for 1999-2022 This presentation shows the seasonality of rainfall in San Jose, with most falling between October and May. The mean is about 12 inches per year, with most years experiencing between 10-15 inches of precipitation. There are functions to produce many statistics such as monthly means. 
#calculate monthly sums monsums <- hydroTSM::daily2monthly(x, sum, na.rm = TRUE) monavg <- as.data.frame(hydroTSM::monthlyfunction(monsums, mean, na.rm = TRUE)) #if record begins in a month other than January, need to reorder monavg <- monavg[order(factor(row.names(monavg), levels = month.abb)),,drop=FALSE] colnames(monavg)[1] <- "Avg monthly precip, in" knitr::kable(monavg, digits = 2) |> kableExtra::kable_paper(bootstrap_options = "striped", full_width = F) Avg monthly precip, in Jan 2.23 Feb 2.26 Mar 1.75 Apr 1.03 May 0.26 Jun 0.10 Jul 0.00 Aug 0.00 Sep 0.10 Oct 0.60 Nov 1.21 Dec 2.31 The winter of 2016-2017 (water year 2017) was a record wet year for much of California. Figure 8.6 shows a hyetograph of the daily values for that year. library(ggplot2) ghcn_prcp2 <- data.frame(date = ghcn_data$DATE, wy = ghcn_data$wy, prcp = ghcn_data$PRCP) ggplot(subset(ghcn_prcp2, wy==2017), aes(x=date, y=prcp)) + geom_bar(stat="identity", color="red") + labs(x="", y="precipitation, inch/day") + scale_x_date(date_breaks = "1 month", date_labels = "%b %d") Figure 8.6: Daily Precipitation for San Jose, CA for water year 2017 While many other statistics could be calculated to characterize precipitation, only a handful more will be shown here. One will use a convenient function of the seas package. This is used in Figure 8.7. 
library(tidyverse) #The average precipitation rate for rainy days (with more than 0.01 inch) avgrainrate <- ghcn_prcp2[ghcn_prcp2$prcp > 0.01,] |> group_by(wy) |> summarise(prcp = mean(prcp)) #the number of rainy days per year nraindays <- ghcn_prcp2[ghcn_prcp2$prcp > 0.01,] |> group_by(wy) |> summarise(nraindays = length(prcp)) #Find length of consecutive dry and wet spells for the record days.dry.wet <- seas::interarrival(ghcn_prcp, var = "prcp", p.cut = 0.01, inv = FALSE) #add a water year column to the result days.dry.wet$wy <- sapply(days.dry.wet$date, wateryr) res <- days.dry.wet |> group_by(wy) |> summarise(cdd = median(dry, na.rm=TRUE), cwd = median(wet, na.rm=TRUE)) res_long <- pivot_longer(res, -wy, names_to="statistic", values_to="consecutive_days") ggplot(res_long, aes(x = wy, y = consecutive_days)) + geom_bar(aes(fill = statistic), stat = "identity", position = "dodge") + xlab("") + ylab("Median consecutive days") Figure 8.7: Median consecutive dry days (cdd) and wet days (cwd) for each water year. 8.2 Precipitation frequency For engineering design, the uncertainty in predicting extreme rainfall, floods, or droughts is expressed as risk, typically the probability that a certain event will be equalled or exceeded in any year. The return period, T, is the inverse of the probability of exceedance, so that a storm with a 10% chance of being exceeded in any year (\\(p_{exceed}~=0.10\\)) is a \\(T=\\frac{1}{0.10}=10\\) year storm. A 10-year storm can be experienced in multiple consecutive years; a return period only means that, on average over very long periods (in a stationary climate), one would expect to see one event every T years. In the U.S., precipitation frequency statistics are available at the NOAA Precipitation Frequency Data Server (PFDS). An example of the graphical data available there is shown in Figure 8.8. Figure 8.8: Intensity-duration-frequency (IDF) curves from the NOAA PFDS. 
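The return period arithmetic above is easy to verify directly in base R; this short sketch (variable names are illustrative) shows the inverse relationship between exceedance probability and return period, and why back-to-back 10-year storms are entirely possible, assuming independence between years.

```r
# Return period is the inverse of the annual exceedance probability
p_exceed <- 0.10
T_return <- 1 / p_exceed   # the 10-year storm

# With independent years, the chance of a 10-year storm occurring in
# two specific consecutive years is small but not zero
p_both_years <- p_exceed^2   # 0.01, i.e., a 1% chance
```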
The calculations performed to produce the IDF curves use decades of daily data, because many years are needed to estimate the frequency with which an event might occur. As a demonstration, however, a single year can be used to illustrate the relationship between intensity and duration, which for durations longer than about 2 hours (McCuen, 2016) can be expressed as in Equation (8.1). \\[\\begin{equation} i = aD^b \\tag{8.1} \\end{equation}\\] As a power curve, Equation (8.1) should be a straight line on a log-log plot. This is shown in Example 8.2. Example 8.2 Use the 2017 water year of rainfall data for the city of San Jose to plot the relationship between intensity and duration for the 1, 3, 7, and 30-day events. Begin by calculating the necessary intensity and duration values. #First extract one water year of data df.one.year <- subset(ghcn_prcp, date>=as.Date("2016-10-01") & date<=as.Date("2017-09-30")) #Calculate the running mean value for the defined durations dur <- c(1,3,7,30) px <- numeric(length(dur)) for (i in seq_along(dur)) { px[i] <- max(zoo::rollmean(df.one.year$prcp, dur[i])) } #create the intensity-duration data frame df.id <- data.frame(duration=dur, intensity=px) Fit the theoretical curve (Equation (8.1)) using the nonlinear least squares function of the stats package (included with a base R installation), and plot the results. 
#fit a power curve to the data fit <- stats::nls(intensity ~ a*duration^b, data=df.id, start=list(a=1,b=-0.5)) print(signif(coef(fit),3)) #> a b #> 1.850 -0.751 #find estimated y-values using the fit df.id$intensity_est <- predict(fit, newdata = list(duration = df.id$duration)) #duration-intensity plot with base graphics plot(x=df.id$duration, y=df.id$intensity, log='xy', pch=1, xaxt="n", xlab="Duration, day", ylab="Intensity, inches/day") lines(x=df.id$duration, y=df.id$intensity_est, lty=2) abline(h = c(seq(0.1,1,0.1),2.0), lty = 3, col = "lightgray") abline(v = c(1,2,3,4,5,7,10,15,20,30), lty = 3, col = "lightgray") axis(side = 1, at = c(1,2,3,4,5,7,10,15,20,30), labels = TRUE) axis(side = 2, at = c(seq(0.1,1,0.1),2.0), labels = TRUE) Figure 8.9: Intensity-duration relationship for water year 2017. Calculated values are based on daily data; theoretical is the power curve fit. If this were done for many years, the results for any one duration could be combined (one value per year) and sorted in decreasing order. The rank assigned to the highest value would then be 1, and the rank of the lowest value would be the number of years, n. The return period, T, for any event would then be found using Equation (8.2), based on the Weibull plotting position formula. \\[\\begin{equation} T=\\frac{n+1}{rank} \\tag{8.2} \\end{equation}\\] That would allow the creation of IDF curves for a point. 8.3 Precipitation gauge consistency – double mass curves The method of using double mass curves to identify changes in an observation method (such as new instrumentation or a change of location) can be applied to precipitation gauges or any other type of measurement. This method is demonstrated with an example from the U.S. Geological Survey (Searcy & Hardison, 1960). The first step is to compile data for a gauge (or better, a set of gauges) known to be unperturbed (Station A in the sample data set), and for a suspect gauge thought to have experienced a change (Station X in this case). 
annual_data <- hydromisc::precip_double_mass knitr::kable(annual_data, digits = 2) |> kableExtra::kable_paper(bootstrap_options = "striped", full_width = F) Year Station_A Station_X 1926 39.75 32.85 1927 29.57 28.08 1928 42.01 33.51 1929 41.39 29.58 1930 31.55 23.76 1931 55.54 58.39 1932 48.11 46.24 1933 39.85 30.34 1934 45.40 46.78 1935 44.89 38.06 1936 32.64 42.82 1937 45.87 37.93 1938 46.05 50.67 1939 49.76 46.85 1940 47.26 50.52 1941 37.07 34.38 1942 45.89 47.60 Accumulate the (annual) precipitation (measured in inches) and plot the values for the suspect station against the reference station(s), as in Figure 8.10. annual_sum <- data.frame(year = annual_data$Year, sum_A = cumsum(annual_data$Station_A), sum_X = cumsum(annual_data$Station_X)) #create scatterplot with a label on every point library(ggplot2) library(ggrepel) ggplot(annual_sum, aes(sum_X, sum_A, label = year)) + geom_point() + geom_text_repel(size=3, direction = "y") + labs(x="Cumulative precipitation at Station X, in", y="Cumulative precipitation at Station A, in") + theme_bw() Figure 8.10: A double mass curve. The break in slope between 1930 and 1931 appears clear. This should be checked against records for the station to verify whether changes did occur at that time. If the data from Station X are to be used to fill other records or estimate long-term averages, the inconsistency needs to be corrected. One method to highlight the year at which the break occurs is to plot the residuals from a best-fit line to the cumulative data from the two stations, as illustrated by the Food and Agriculture Organization (FAO) (Allen & United Nations, 1998). linfit <- lm(sum_X ~ sum_A, data = annual_sum) plot(x=annual_sum$year, y=linfit$residuals, xlab = "Year", ylab = "Residual of regression") Figure 8.11: Residuals of the linear fit to the double-mass curve. 
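As a complement to visual inspection of the residual plot, the break year can be flagged programmatically: the residuals of a single straight-line fit to the full record form a kink at the break, so the largest second difference locates it. This is a rough heuristic, sketched here with synthetic cumulative sums (a real analysis would use the annual_sum data frame above).

```r
# Synthetic cumulative precipitation with a slope change after year 5
year  <- 1:10
sum_A <- cumsum(rep(40, 10))                 # consistent reference station
sum_X <- cumsum(c(rep(30, 5), rep(45, 5)))   # suspect station, higher catch after year 5

# Residuals from one straight-line fit to the full record
res <- residuals(lm(sum_X ~ sum_A))

# The residual series kinks at the break; the largest second difference locates it
break_year <- year[which.max(abs(diff(res, differences = 2))) + 1]
break_year   # last year of the early regime
```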
This verifies that after 1930 the steep decline ends, so it may represent a change in location or equipment. Adjusting the earlier record to be consistent with the later period is done by applying Equation (8.3). \\[\\begin{equation} y^{'}_i~=~\\frac{b_2}{b_1}y_i \\tag{8.3} \\end{equation}\\] where b2 and b1 are the slopes after and before the break in slope, respectively, yi is the original precipitation, and y’i is the adjusted precipitation. This can be applied as follows. b1 <- lm(sum_X ~ sum_A, data = subset(annual_sum, year <= 1930))$coefficients[['sum_A']] b2 <- lm(sum_X ~ sum_A, data = subset(annual_sum, year > 1930))$coefficients[['sum_A']] #Adjust early values and concatenate to later values for Station X adjusted_X <- c(annual_data$Station_X[annual_data$Year <= 1930]*b2/b1, annual_data$Station_X[annual_data$Year > 1930]) annual_sum_adj <- data.frame(year = annual_data$Year, sum_A = cumsum(annual_data$Station_A), sum_X = cumsum(adjusted_X)) #Check that slope now appears more consistent ggplot(annual_sum_adj, aes(sum_X, sum_A, label = year)) + geom_point() + geom_text_repel(size=3, direction = "y") + labs(x="Cumulative adjusted precipitation at Station X, in", y="Cumulative precipitation at Station A, in") + theme_bw() Figure 8.12: A double mass curve using adjusted data at Station X. The plot shows a more consistent slope, as expected. Another plot of residuals could also validate the effect of the adjustment. 8.4 Precipitation interpolation and areal averaging It is rare that there are precipitation observations exactly where one needs data, so existing observations must be interpolated to a point of interest. The same approach is used to fill in missing data in a record using surrounding observations. Interpolation is also used to turn sparse observations, or observations from a variety of sources, into a spatially continuous grid. 
This is an essential step in estimating the precipitation averaged across an area that contributes streamflow to some location of concern. Estimating areal average precipitation using simple, manual methods has been outlined by the U.S. National Weather Service, illustrated in Figure 8.13. Figure 8.13: Some basic precipitation interpolation methods, from the U.S. National Weather Service. With the advent of geographic information system (GIS) software, manual interpolation is rarely used. Rather, more advanced spatial analysis is performed to interpolate precipitation onto a continuous grid, where the uncertainty (or skill) of different methods can be assessed. Spatial analysis methods to do this are outlined in many other references, such as Spatial Data Science and the related book Spatial Data Science with applications in R, or the reference Geocomputation with R (Lovelace et al., 2019; Pebesma & Bivand, 2023). There are also many sources of precipitation data already interpolated to a regular grid. The geodata package provides access to many data sets, including the Worldclim biophysical data. Another source of global precipitation data, available at daily to monthly scales, is the CHIRPS data set, which has been widely used in many studies. An example of obtaining and plotting average annual precipitation over Santa Clara County is illustrated below. #Load precipitation in mm, already cropped to cover most of California datafile <- system.file("extdata", "prcp_cropped.tif", package="hydromisc") prcp <- terra::rast(datafile) scc_bound <- terra::vect(hydromisc::scc_county) scc_precip <- terra::crop(prcp, scc_bound) terra::plot(scc_precip, plg=list(title="Precip\\n(mm)", title.cex=0.7)) terra::plot(scc_bound, add=TRUE) Figure 8.14: Annual Average Precipitation over Santa Clara County, mm Spatial statistics are easily obtained using terra, a versatile package for spatial analysis. 
terra::summary(scc_precip) #> chirps.v2.0.1981.2020.40yrs #> Min. : 197.1 #> 1st Qu.: 354.9 #> Median : 447.9 #> Mean : 542.3 #> 3rd Qu.: 652.3 #> Max. :1297.2 #> NA's :5 "],["fate-of-precipitation.html", "Chapter 9 Fate of precipitation 9.1 Interception 9.2 Infiltration 9.3 Evaporation 9.4 Snow 9.5 Watershed analysis", " Chapter 9 Fate of precipitation As precipitation falls, it can be caught on vegetation (interception), percolate into the ground (infiltration), return to the atmosphere (evaporation), or accumulate as rain or snow and become available as runoff. The landscape (land cover and topography) and the time scale of study determine what processes are important. For example, for estimating runoff from an individual storm, interception is likely to be small, as is evaporation. On an annual average over large areas, evaporation will often be the largest component. Comprehensive hydrology models will estimate abstractions due to infiltration and interception, either by simulating the physics of the phenomenon or by using a lumped parameter that accounts for the effects of abstractions on runoff. The hydromisc package will need to be installed to access some of the code and data used below. If it is not installed, do so following the instructions on the github site for the package. 9.1 Interception Figure 9.1: Rain interception by John Robert McPherson, CC BY-SA 4.0, via Wikimedia Commons Interception of rainfall is generally small during individual storms (0.5-2 mm), so it is often ignored, or lumped in with other abstractions, for analyses of flood hydrology. For areas characterized by low-intensity rainfall and heavy vegetation, interception can account for a larger portion of the rainfall (for example, up to 25% of annual rainfall in the Pacific Northwest) (McCuen, 2016). 9.2 Infiltration An early empirical equation describing infiltration rate into soils was developed by Horton in 1939, which takes the form of Equation (9.1). 
\\[\\begin{equation} f_p~=~ f_c + \\left(f_0 - f_c\\right)e^{-kt} \\tag{9.1} \\end{equation}\\] This describes a potential infiltration rate, \\(f_p\\), beginning at a maximum \\(f_0\\) and decreasing with time toward a minimum value \\(f_c\\) at a rate described by the decay constant \\(k\\). \\(f_c\\) is also equal to the saturated hydraulic conductivity, \\(K_s\\), of the soil. If the rainfall rate exceeds \\(f_c\\), this equation describes the actual infiltration rate with time. For periods of time with rainfall less intense than \\(f_c\\), it is convenient to integrate this to relate the total cumulative depth of water infiltrated, \\(F\\), to the actual infiltration rate, \\(f_p\\), as in Equation (9.2). \\[\\begin{equation} F~=~\\left[\\frac{f_c}{k}ln\\left(f_0-f_c\\right)+\\frac{f_0}{k}\\right]-\\frac{f_c}{k}ln\\left(f_p-f_c\\right)-\\frac{f_p}{k} \\tag{9.2} \\end{equation}\\] A more physically based relationship describing infiltration rate is the Green-Ampt model. It is based on the physical laws describing the propagation of a wetting front downward through a soil column under a ponded water surface. The Green-Ampt relationship is in Equation (9.3). \\[\\begin{equation} K_st~=~F-\\left(n-\\theta_i\\right)\\Phi_f~ln\\left[1+\\frac{F}{\\left(n-\\theta_i\\right)\\Phi_f}\\right] \\tag{9.3} \\end{equation}\\] Equation (9.3) assumes ponding begins at t=0, meaning the rainfall rate exceeds \\(K_s\\). When rainfall rates are less than that, adjustments to the method are used. Parameters are shown in the table below. Figure 9.2: Green-Ampt Parameter Estimates and Ranges based on Soil Texture (USACE) While not demonstrated here, parameters for the Horton and Green-Ampt methods can be derived from observed infiltration data using the R package vadose. 
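The Horton decay of Equation (9.1) is straightforward to evaluate in R. A minimal sketch follows; the parameter values are illustrative assumptions, not measured soil properties.

```r
# Horton infiltration capacity, Equation (9.1): f_p = f_c + (f_0 - f_c) * exp(-k*t)
horton <- function(t, f0, fc, k) fc + (f0 - fc) * exp(-k * t)

f0 <- 75   # initial infiltration rate, mm/hr (assumed)
fc <- 10   # final (saturated) rate, mm/hr (assumed)
k  <- 2    # decay constant, 1/hr (assumed)

horton(0, f0, fc, k)              # starts at f0
horton(c(0.5, 1, 2), f0, fc, k)   # decays toward fc as t grows
```

The same function can be curried into `stats::integrate` to get cumulative depth over an interval when rainfall always exceeds capacity.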
The most widely used method for estimating infiltration is the NRCS method, described in detail in the NRCS document Estimating Runoff Volume and Peak Discharge. This method describes the direct runoff (as a depth), \\(Q\\), resulting from a precipitation event, \\(P\\), as in Equation (9.4). \\[\\begin{equation} Q~=~\\frac{\\left(P-I_a\\right)^2}{\\left(P-I_a\\right)+S} \\tag{9.4} \\end{equation}\\] \\(S\\) is the maximum retention of water by the soil column and \\(I_a\\) is the initial abstraction, commonly estimated as \\(I_a=0.2S\\). Substituting this into Equation (9.4) produces Equation (9.5). \\[\\begin{equation} Q~=~\\frac{\\left(P-0.2~S\\right)^2}{\\left(P+0.8~S\\right)} \\tag{9.5} \\end{equation}\\] This relationship applies as long as \\(P>0.2~S\\); Q=0 otherwise. Values for S are derived from a Curve Number (CN), which summarizes the land cover, soil type and condition: \\[CN=\\frac{1000}{10+S}\\] where \\(S\\), and subsequently \\(Q\\), are in inches. Equation (9.5) can be rearranged to a form similar to those for the Horton and Green-Ampt equations for cumulative infiltration, \\(F\\): \\[F~=~\\frac{\\left(P-0.2~S\\right)S}{P+0.8~S}\\] 9.3 Evaporation Evaporation is simply the change of water from liquid to vapor state. Because it is difficult to separate evaporation from the soil from transpiration by vegetation, the two are usually combined into evapotranspiration, or ET; see Figure 9.3. Figure 9.3: Schematic of ET, from CIMIS ET can be estimated in a variety of ways, but it is important first to define three types of ET: - Potential ET, \\(ET_p\\) or \\(PET\\): essentially the same as the rate at which water would evaporate from a free water surface. - Reference crop ET, \\(ET_{ref}\\) or \\(ET_0\\): the rate at which water evaporates from a well-watered reference crop, usually grass of a standard height. 
- Actual ET, \\(ET\\): this is the water used by a crop or other vegetation, usually calculated by adjusting the \\(ET_0\\) term by a crop coefficient that accounts for factors such as plant height, growth stage, and soil exposure. Estimating \\(ET_0\\) can range from methods as uncomplicated as the Thornthwaite equation, which depends only on mean monthly temperatures, to the Penman-Monteith equation, which includes solar and longwave radiation, wind and humidity effects, and reference crop (grass) characteristics. Inclusion of more complexity, especially where observations can supply the needed input, produces more reliable estimates of \\(ET_0\\). One of the most common implementations of the Penman-Monteith equation is the version of the FAO (FAO Irrigation and drainage paper 56, or FAO56) (Allen & United Nations, 1998). Refer to FAO56 for step-by-step instructions on determining each term in the Penman-Monteith equation, Equation (9.6). \\[\\begin{equation} \\lambda~ET~=~\\frac{\\Delta\\left(R_n-G\\right)+\\rho_ac_p\\frac{\\left(e_s-e_a\\right)}{r_a}}{\\Delta+\\gamma\\left(1+\\frac{r_s}{r_a}\\right)} \\tag{9.6} \\end{equation}\\] Open water evaporation can be calculated using the original Penman equation (1948): \\[\\lambda~E_p~=~\\frac{\\Delta~R_n+\\gamma~E_a}{\\Delta~+~\\gamma}\\] where \\(R_n\\) is the net radiation available to evaporate water and \\(E_a\\) is a mass transfer function usually including humidity (or vapor pressure deficit) and wind speed. \\(\\lambda\\) is the latent heat of vaporization of water. A common implementation of the Penman equation is \\[\\begin{equation} \\lambda~E_p~=~\\frac{\\Delta~R_n+\\gamma~6.43\\left(1+0.536~U_2\\right)\\left(e_s-e\\right)}{\\Delta~+~\\gamma} \\tag{9.7} \\end{equation}\\] Here \\(E_p\\) is in mm/d, \\(\\Delta\\) and \\(\\gamma\\) are in \\(kPa~K^{-1}\\), \\(R_n\\) is in \\(MJ~m^{−2}~d^{−1}\\), \\(U_2\\) is in m/s, and \\(e_s\\) and \\(e\\) are in kPa. Variables are as defined in FAO56. 
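Equation (9.7) can be coded directly. The sketch below assumes a latent heat of vaporization of about 2.45 MJ/kg (the value used in FAO56) to convert \\(\\lambda E_p\\) to a depth in mm/d; the input values are illustrative, not from any particular site.

```r
# Open-water evaporation from Equation (9.7); inputs are illustrative
penman_ep <- function(Delta, gamma, Rn, U2, es, e, lambda = 2.45) {
  # Delta, gamma in kPa/K; Rn in MJ/m^2/d; U2 in m/s; es, e in kPa
  # lambda = latent heat of vaporization, MJ/kg; returns E_p in mm/d
  (Delta * Rn + gamma * 6.43 * (1 + 0.536 * U2) * (es - e)) /
    ((Delta + gamma) * lambda)
}

penman_ep(Delta = 0.145, gamma = 0.066, Rn = 15, U2 = 2,
          es = 2.34, e = 1.40)   # about 5.8 mm/d for these inputs
```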
Open water evaporation can also be calculated using a modified version of the Penman-Monteith equation (9.6). In this latter case, vegetation coefficients are not needed, so Equation (9.6) can be used with \\(r_s=0\\) and \\(r_a=251/(1+0.536~u_2)\\), following Thom & Oliver, 1977. The R package Evapotranspiration has functions to calculate \\(ET_0\\) using this and many other formulations. This is especially useful when calculating PET over many points or through a long time series. 9.4 Snow 9.4.1 Observations In mountainous areas a substantial portion of the precipitation may fall as snow, where it can be stored for months before melting and becoming runoff. Any hydrologic analysis in an area affected by snow must account for the dynamics of this natural reservoir and how it affects water supply. In the Western U.S., the most comprehensive observations of snow are part of the SNOTEL (SNOw TELemetry) network. Figure 9.4: The SNOTEL network. 9.4.2 Basic snowmelt theory and simple models For snow to melt, heat must be added to first bring the snowpack to the melting point; it takes about 2 kJ/kg to increase snowpack temperature 1\\(^\\circ\\)C. Additional heat is required for the phase change from ice to water (the latent heat of fusion), about 335 kJ/kg. Heat can be provided by absorbed solar radiation, longwave radiation, ground heat, warm air, warm rain falling on the snowpack, or water vapor condensing on the snow. Once snow melts, it can percolate through the snowpack and be retained, similar to water retained by soil, and may re-freeze (releasing the latent heat of fusion, which can then cause more melt). As with any other hydrologic process, there are many ways it can be modeled, from simplified empirical relationships to complex physics-based representations. While accounting for all of the many processes involved would be a robust approach, often there are not adequate observations to support their use, so simpler parameterizations are used. 
Here only the simplest index-based snow model is discussed, as in Equation (9.8). \\[\\begin{equation} M~=~K_d\\left(T_a~-~T_b\\right) \\tag{9.8} \\end{equation}\\] M is the melt rate in mm/d (or in/d), \\(T_a\\) is air temperature (sometimes a daily mean, sometimes a daily maximum), \\(T_b\\) is a base temperature, usually 0\\(^\\circ\\)C (or 32\\(^\\circ\\)F), and \\(K_d\\) is a degree-day melt factor in mm/d/\\(^\\circ\\)C (or in/d/\\(^\\circ\\)F). The melt factor, \\(K_d\\), is highly dependent on local conditions and on the time of year (as an indicator of the snow pack condition); different \\(K_d\\) factors can be used for different months, for example. Refreezing of melted snow, when temperatures are below \\(T_b\\), can also be estimated using an index model, such as Equation (9.9). \\[\\begin{equation} Fr~=~K_f\\left(T_b~-~T_a\\right) \\tag{9.9} \\end{equation}\\] Importantly, temperature-index snowmelt relations have been developed primarily for describing snowmelt at the end of the season, after the peak of snow accumulation (typically April-May in the mountainous western U.S.), and their use during the snow accumulation season may overestimate melt. Different degree-day factors are often used, with the factors increasing later in the melt season. From a hydrologic perspective, the most important snow quantity is the snow water equivalent (SWE), which is the depth of water obtained by melting the snow. An example of using a snowmelt index model follows. Example 9.1 Manually calibrate an index snowmelt model for a SNOTEL site using one year of data. Visit the SNOTEL website to select a site. In this example site 1050, Horse Meadow, located in California, is used. Next download the data using the snotelr package (install the package first, if needed). sta <- "1050" snow_data <- snotelr::snotel_download(site_id = sta, internal = TRUE) Plot the data to assess the period available and how complete it is. 
plot(as.Date(snow_data$date), snow_data$snow_water_equivalent, type = "l", xlab = "Date", ylab = "SWE (mm)") Figure 9.5: Snow water equivalent at SNOTEL site 1050. Note the units are SI. If you download data directly from the SNOTEL website the data will be in conventional US units; snotelr converts the data to SI units as it imports. The package includes a function snotel_metric that could be used to convert raw data downloaded from the SNOTEL website to SI units. For this exercise, extract a single (water) year, meaning from 1-Oct to 30-Sep, so an entire winter is in one year. In addition, create a data frame that only includes the columns that are needed. snow_data_subset <- subset(snow_data, as.Date(date) >= as.Date("2008-10-01") & as.Date(date) <= as.Date("2009-09-30")) snow_data_sel <- subset(snow_data_subset, select=c("date", "snow_water_equivalent", "precipitation", "temperature_mean", "temperature_min", "temperature_max")) plot(as.Date(snow_data_sel$date), snow_data_sel$snow_water_equivalent, type = "l", xlab = "Date", ylab = "SWE (mm)") grid() Figure 9.6: Snow water equivalent at SNOTEL site 1050 for water year 2009. Now use a snow index model to simulate the SWE based on temperature and precipitation. The model used here is a modified version of that used in the hydromad package. The snow.sim command is used to run a snow index model; type ?hydromisc::snow.sim for details on its use. In summary, the four main parameters you can adjust in the calibration of the model are: The maximum air temperature for snow, Tmax. Snow can fall at air temperatures as high as about 3\\(^\\circ\\)C, but Tmax is usually lower. The minimum air temperature for rain, Tmin. Rain can fall when near-surface air temperatures are below freezing; Tmin may be as low as about -1\\(^\\circ\\)C, and as high as 1\\(^\\circ\\)C. Base temperature, Tmelt, the temperature at which melt begins. 
Usually the default of 0\\(^\\circ\\)C is used, but some adjustment (generally between -2 and 2\\(^\\circ\\)C) can be applied to improve model calibration. Snow Melt (Degree-Day) Factor, kd, which describes the melting of the snow when temperatures are above freezing. Be careful using values from different references as these are dependent on units. Typical values are between 1 and 5 mm/d/\\(^\\circ\\)C. Two additional parameters are optional; their effects are typically small. Degree-Day Factor for freezing, kf, of liquid water in the snow pack when temperatures are below freezing. By default it is set to 1 mm/d/\\(^\\circ\\)C, and may vary from 0 to 2 mm/d/\\(^\\circ\\)C. Snow water retention factor, rcap. When snow melts some of it can be retained via capillarity in the snow pack, where it can re-freeze or drain out. This is expressed as a fraction of the frozen snow pack. The default is 2.5% (rcap = 0.025). Start with some assumed values and run the snow model. Tmax_snow <- 3 Tmin_rain <- 2 kd <- 1 snow_estim <- hydromisc::snow.sim(DATA=snow_data_sel, Tmax=Tmax_snow, Tmin=Tmin_rain, kd=kd) Now the simulated values can be compared to the observations. If not installed already, install the hydroGOF package, which has some useful functions for evaluating how well modeled output fits observations. In the plot that follows we specify three measures of goodness-of-fit: Mean Absolute Error (MAE) Root Mean Square Error (RMSE) Percent Bias (PBIAS) These are discussed in detail in other references, but the aim is to calibrate (change the input parameters) until these values are low. obs <- snow_data_sel$snow_water_equivalent sim <- snow_estim$swe_simulated hydroGOF::ggof(sim, obs, na.rm = TRUE, dates=snow_data_sel$date, gofs=c("MAE", "RMSE", "PBIAS"), xlab = "", ylab="SWE, mm", tick.tstep="months", cex=c(0,0), lwd=c(2,2)) Figure 9.7: Simulated and Observed SWE at SNOTEL site 1050 for water year 2009. 
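The goodness-of-fit measures used here follow standard definitions and can be verified by hand. This base-R sketch uses short illustrative vectors (not the SNOTEL data) to show exactly what each statistic computes.

```r
# Hand computation of the goodness-of-fit measures (illustrative vectors)
obs_demo <- c(100, 120, 150, 130)
sim_demo <- c( 90, 125, 160, 120)

mae   <- mean(abs(sim_demo - obs_demo))                  # mean absolute error
rmse  <- sqrt(mean((sim_demo - obs_demo)^2))             # root mean square error
pbias <- 100 * sum(sim_demo - obs_demo) / sum(obs_demo)  # percent bias

c(MAE = mae, RMSE = rmse, PBIAS = pbias)
```

A negative PBIAS here indicates the simulation slightly underestimates the observations in total.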
Melt is overestimated in the early part of the year and underestimated during the melt season, showing why a single index is not a very robust model. Applying two kd values (one for the early-to-mid snow season and another for late-season melt) could improve the fit, but would make the model less transferable to other situations, such as increased temperatures. 9.4.3 Snow model calibration While manual model calibration can improve the fit, a more complete calibration involves optimization methods that search the parameter space for the optimal combination of parameter values. A useful tool for doing this is the optim function, part of the stats package installed with base R. Using optim requires establishing a function to be minimized, where the parameters to be included in the optimization are the first argument. With the method="L-BFGS-B" used here, optim also requires explicit ranges over which parameters can be varied, via the lower and upper arguments. An example of this follows, where the four main model parameters noted above are used, and the MAE is minimized. fcn_to_minimize <- function(par, datain, obs){ snow_estim <- hydromisc::snow.sim(DATA=datain, Tmax=par[1], Tmin=par[2], kd=par[3], Tmelt=par[4]) calib.stats <- hydroGOF::gof(snow_estim$swe_simulated, obs, na.rm=TRUE) objective_stat <- as.numeric(calib.stats['MAE',]) return(objective_stat) } opt_res <- optim(par=c(0.5,1,1,0), fn=fcn_to_minimize, lower=c(-1,-1,0.5,-2), upper=c(3,1,5,3), method="L-BFGS-B", datain=snow_data_sel, obs=obs) #print out optimal parameters - note Tmax and Tmin can be reversed during optimization cat(sprintf("Optimal parameters:\\nTmax=%.1f\\nTmin=%.1f\\nkd=%.2f\\nTmelt=%.1f\\n", max(opt_res$par[1],opt_res$par[2]), min(opt_res$par[1],opt_res$par[2]), opt_res$par[3], opt_res$par[4])) #> Optimal parameters: #> Tmax=1.0 #> Tmin=0.5 #> kd=1.05 #> Tmelt=-0.0 The results using the optimal parameters can be plotted to visualize the simulation. 
snow_estim_opt <- hydromisc::snow.sim(DATA=snow_data_sel, Tmax=max(opt_res$par[1],opt_res$par[2]), Tmin=min(opt_res$par[1],opt_res$par[2]), kd=opt_res$par[3], Tmelt=opt_res$par[4]) obs <- snow_data_sel$snow_water_equivalent sim <- snow_estim_opt$swe_simulated hydroGOF::ggof(sim, obs, na.rm = TRUE, dates=snow_data_sel$date, gofs=c("MAE", "RMSE", "PBIAS"), xlab = "", ylab="SWE, mm", tick.tstep="months", cex=c(0,0),lwd=c(2,2)) Figure 9.8: Optimal simulation of SWE at SNOTEL site 1050 for water year 2009. It is clear that a simple temperature index model cannot capture the snow dynamics at this location, especially during the winter when melt is significantly overestimated. 9.4.4 Estimating climate change impacts on snow Once a reasonable calibration is obtained, the effect of increasing temperatures on SWE can be simulated by including the deltaT argument in the hydromisc::snow.sim command. Here a 3\\(^\\circ\\)C uniform temperature increase is imposed on the optimal parameterization above. dT <- 3.0 snow_plus3 <- hydromisc::snow.sim(DATA=snow_data_sel, Tmax=max(opt_res$par[1],opt_res$par[2]), Tmin=min(opt_res$par[1],opt_res$par[2]), kd=opt_res$par[3], Tmelt=opt_res$par[4], deltaT = dT) simplusdT <- snow_plus3$swe_simulated # plot the results dTlegend <- expression("Simulated"*+3~degree*C) plot(as.Date(snow_data_sel$date),obs,type = "l",xlab = "", ylab = "SWE (mm)") lines(as.Date(snow_estim$date),sim,lty=2,col="blue") lines(as.Date(snow_estim$date),simplusdT,lty=3,col="red") legend("topright", legend = c("Observed", "Simulated",dTlegend), lty = c(1,2,3), col=c("black","blue","red")) grid() Figure 9.9: Observed SWE and simulated with observed meteorology and increased temperatures. 9.5 Watershed analysis Whether precipitation falls as rain or snow, how much is absorbed by plants, consumed by evapotranspiration, and what is left to become runoff, is all determined by watershed characteristics. 
This can include: watershed area; slope of terrain; elevation variability (a hypsometric curve); soil types; and land cover. Collecting this information begins with obtaining a digital elevation model for an area, identifying any key point or points on a stream (a watershed outlet), and then delineating the area that drains to that point. This process of watershed delineation is often done with GIS software like ArcGIS or QGIS. The R package WhiteboxTools provides capabilities for advanced terrain analysis in R. Demonstrations of the use of these tools for a watershed are in the online book Hydroinformatics at VT by JP Gannon. In particular, the chapters on mapping a stream network and delineating a watershed are excellent resources for exploring these capabilities in R. For locations in the U.S., watersheds, stream networks, and attributes of both can be obtained and viewed using nhdplusTools. Land cover and soil information can be obtained using the FedData package. "],["designing-for-floods-flood-hydrology.html", "Chapter 10 Designing for floods: flood hydrology 10.1 Engineering design requires probability and statistics 10.2 Estimating floods when you have peak flow observations - flood frequency analysis 10.3 Estimating floods from precipitation", " Chapter 10 Designing for floods: flood hydrology Figure 10.1: The international bridge between Fort Kent, Maine and Clair, New Brunswick during a flood (source: NOAA) Flood hydrology is generally the description of how frequently a flood of a certain level will be exceeded in a specified period. This was discussed briefly in the section on precipitation frequency, Section 8.2. The hydromisc package will need to be installed to access some of the data used below. If it is not installed, do so following the instructions on the github site for the package. 10.1 Engineering design requires probability and statistics Before diving into peak flow analysis, it helps to refresh your background in basic probability and statistics.
Some excellent resources for this, using R as the primary tool, are: a very brief tutorial by Prof. W.B. King; a more thorough text by Prof. G. Jay Kerns, which has a companion R package; and an online flood hydrology reference based in R by Prof. Helen Fairweather. Rather than repeat what is in those references, a couple of short demonstrations here will show some of the skills needed for flood hydrology. The first example illustrates binomial probabilities, which are useful for events with only two possible outcomes (e.g., a flood happens or it doesn’t), where each outcome is independent and probabilities of each are constant. R functions for distributions use a first letter to designate what they return: d is the density, p is the (cumulative) distribution, q is the quantile, r is a random sequence. In R the defaults for probabilities are to define them as \\(P[X~\\le~x]\\), or a probability of non-exceedance. Recall that a probability of exceedance is simply 1 - (probability of non-exceedance), or \\(P[X~\\gt~x] ~=~ 1-P[X~\\le~x]\\). In R, for quantiles or probabilities (using functions beginning with q or p like pnorm or qlnorm), setting the argument lower.tail to FALSE uses a probability of exceedance instead of non-exceedance. Example 10.1 A temporary dam is constructed while a repair is built. It will be in place for 5 years and is designed to protect against floods up to a 20-year recurrence interval (i.e., there is a \\(p=\\frac{1}{20}=0.05\\), or 5% chance, that it will be exceeded in any one year). What is the probability of (a) no failure in the 5-year period, and (b) at least two failures in 5 years?
# (a) ans1 <- dbinom(0, 5, 0.05) cat(sprintf("Probability of exactly zero occurrences in 5 years = %.4f %%",100*ans1)) #> Probability of exactly zero occurrences in 5 years = 77.3781 % # (b) ans2 <- 1 - pbinom(1,5,.05) # or pbinom(1,5,.05, lower.tail=FALSE) cat(sprintf("Probability of 2 or more failures in 5 years = %.2f %%",100*ans2)) #> Probability of 2 or more failures in 5 years = 2.26 % While the next example uses normally distributed data, most data in hydrology are better described by other distributions. Example 10.2 Annual average streamflows in some location are normally distributed with a mean annual flow of 20 m\\(^3\\)/s and a standard deviation of 6 m\\(^3\\)/s. Find (a) the probability of experiencing a year with less than (or equal to) 10 m\\(^3\\)/s, (b) greater than 32 m\\(^3\\)/s, and (c) the annual average flow that would be expected to be exceeded 10% of the time. # (a) ans1 <- pnorm(10, mean=20, sd=6) cat(sprintf("Probability of less than 10 = %.2f %%",100*ans1)) #> Probability of less than 10 = 4.78 % # (b) ans2 <- pnorm(32, mean=20, sd=6, lower.tail = FALSE) #or 1 - pnorm(32, mean=20, sd=6) cat(sprintf("Probability of greater than 32 = %.2f %%",100*ans2)) #> Probability of greater than 32 = 2.28 % # (c) ans3 <- qnorm(.1, mean=20, sd=6, lower.tail=FALSE) cat(sprintf("flow exceeded 10%% of the time = %.2f m^3/s",ans3)) #> flow exceeded 10% of the time = 27.69 m^3/s # plot to visualize answers x <- seq(0,40,0.1) y<- pnorm(x,mean=20,sd=6) xlbl <- expression(paste(Flow, ",", ~ m^"3"/s)) plot(x ,y ,type="l",lwd=2, xlab = xlbl, ylab= "Prob. of non-exceedance") abline(v=10,col="black", lwd=2, lty=2) abline(v=32,col="blue", lwd=2, lty=2) abline(h=0.9,col="green", lwd=2, lty=2) legend("bottomright",legend=c("(a)","(b)","(c)"),col=c("black","blue","green"), cex=0.8, lty=2) Figure 10.2: Illustration of three solutions.
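The binomial logic above extends directly to a common design question: the chance that a T-year event occurs at least once during an n-year project life. Below is a minimal sketch using illustrative values (a 100-year event and a 30-year design life) that are not from the examples above.

```r
# Risk of at least one exceedance of a T-year event during an n-year design
# life. T_rp and n are hypothetical values chosen for illustration.
T_rp <- 100                             # return period, years
n <- 30                                 # design life, years
risk <- 1 - (1 - 1/T_rp)^n              # complement of zero occurrences
risk_binom <- 1 - dbinom(0, n, 1/T_rp)  # identical result via dbinom
cat(sprintf("Risk of at least one %d-yr event in %d years = %.1f %%\n",
            T_rp, n, 100*risk))
#> Risk of at least one 100-yr event in 30 years = 26.0 %
```

Even a 100-year design standard leaves roughly a one-in-four chance of at least one exceedance over a 30-year project life.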
10.2 Estimating floods when you have peak flow observations - flood frequency analysis For an area fortunate enough to have a long record (i.e., several decades or more) of observations, estimating flood risk is a matter of statistical data analysis. In the U.S., data collected by the U.S. Geological Survey (USGS) can be accessed through the National Water Dashboard. Sometimes for discontinued stations it is easier to locate data through the older USGS map interface. For any site, data may be downloaded to a file, and the peakfq (watstore) format, designed to be imported into the PeakFQ software, is easy to work with in R. 10.2.1 Installing helpful packages The USGS has developed many R packages, including one for retrieval of data, dataRetrieval. Since this resides on CRAN, the package can be installed with (the use of ‘!requireNamespace’ skips the installation if it is already installed): if (!requireNamespace("dataRetrieval", quietly = TRUE)) install.packages("dataRetrieval") Other USGS packages that are very helpful for peak flow analysis are not on CRAN, but rather housed in a USGS repository. The easiest way to install packages from that archive is using the install.load package. Then the install_load command will first search the standard CRAN archive for the package, and if it is not found there the USGS archive is searched. Packages are also loaded (equivalent to using the library command). install_load also installs dependencies of packages, so here installing smwrGraphs also installs smwrBase. The prefix smwr refers to their use in support of the excellent reference Statistical Methods in Water Resources.
if (!requireNamespace("install.load", quietly = TRUE)) install.packages("install.load") install.load::install_load("smwrGraphs") #this command also installs smwrBase Lastly, the lmomco package has extensive capabilities to work with many forms of probability distributions, and has functions for calculating distribution parameters (like skew) that we will use. if (!requireNamespace("lmomco", quietly = TRUE)) install.packages("lmomco") 10.2.2 Download, manipulate, and plot the data for a site Using the older USGS site mapper, and specifying that inactive stations should also be included, many stations in the south Bay Area in California are shown in Figure 10.3. Figure 10.3: Active and Inactive USGS sites recording peak flows. While the data could be downloaded and saved locally through that link, it is convenient here to use the dataRetrieval command. Qpeak_download <- dataRetrieval::readNWISpeak(siteNumbers="11169000") The data used here are also available as part of the hydromisc package, and may be obtained by typing hydromisc::Qpeak_download. It is always helpful to look at the downloaded data frame before doing anything with it. There are many columns that are not needed or that have repeated information. There are also some rows that have no data (‘NA’ values). It is also useful to change some column names to something more intuitive. We will need to define the water year (a water year begins October 1 and ends September 30). Qpeak <- Qpeak_download[!is.na(Qpeak_download$peak_dt),c('peak_dt','peak_va')] colnames(Qpeak)[colnames(Qpeak)=="peak_dt"] <- "Date" colnames(Qpeak)[colnames(Qpeak)=="peak_va"] <- "Peak" Qpeak$wy <- smwrBase::waterYear(Qpeak$Date) The data have now been simplified so that can be used more easily in the subsequent flood frequency analysis. Data should always be plotted, which can be done many ways. 
As a demonstration of highlighting specific years in a barplot, the strongest El Niño years (in 1930-2002) from NOAA Physical Sciences Lab can be highlighted in red. xlbl <- "Water Year" ylbl <- expression("Peak Flow, " ~ ft^{3}/s) nino_years <- c(1983,1998,1992,1931,1973,1987,1941,1958,1966, 1995) cols <- c("blue", "red")[(Qpeak$wy %in% nino_years) + 1] barplot(Qpeak$Peak, names.arg = Qpeak$wy, xlab = xlbl, ylab=ylbl, col=cols) Figure 10.4: Annual peak flows for USGS gauge 11169000, highlighting strong El Niño years in red. 10.2.3 Flood frequency analysis The general formula used for flood frequency analysis is Equation (10.1). \\[\\begin{equation} y=\\overline{y}+Ks_y \\tag{10.1} \\end{equation}\\] where y is the flow at the designated return period, \\(\\overline{y}\\) is the mean of all \\(y\\) values and \\(s_y\\) is the standard deviation. In most instances, \\(y\\) is a log-transformed flow; in the US a base-10 logarithm is generally used. \\(K\\) is a frequency factor, which is a function of the return period, the parent distribution, and often the skew of the y values. The guidance of the USGS (as in Guidelines for Determining Flood Flow Frequency, Bulletin 17C) (England, J.F. et al., 2019) is to use the log-Pearson Type III (LP-III) distribution for flood frequency data, though in different settings other distributions can perform comparably. For using the LP-III distribution, we will need several statistical properties of the data: mean, standard deviation, and skew, all of the log-transformed data, calculated as follows. mn <- mean(log10(Qpeak$Peak)) std <- sd(log10(Qpeak$Peak)) g <- lmomco::pmoms(log10(Qpeak$Peak))$skew With those calculated, a defined return period can be chosen and the flood frequency factors, from Equation (10.1), calculated for that return period (the example here is for a 50-year return period). 
The qnorm function from base R and the qpearsonIII function from the smwrBase package make this straightforward, and K values for Equation (10.1) are obtained for a lognormal, Knorm, and LP-III, Klp3. RP <- 50 Knorm <- qnorm(1 - 1/RP) Klp3 <- smwrBase::qpearsonIII(1-1/RP, skew = g) Now the flood frequency equation (10.1) can be applied to calculate the flows associated with the 50-year return period for each of the distributions. Remember to take the anti-log of your answer to return to standard units. Qpk_LN <- mn + Knorm * std Qpk_LP3 <- mn + Klp3 * std sprintf("RP = %d years, Qpeak LN = %.0f cfs, Qpeak LP3 = %.0f",RP,10^Qpk_LN,10^Qpk_LP3) #> [1] "RP = 50 years, Qpeak LN = 18362 cfs, Qpeak LP3 = 12396" 10.2.4 Creating a flood frequency plot Different probability distributions can produce very different results for a design flood flow. Plotting the historical observations along with the distributions, the lognormal and LP-III in this case, can help explain why they differ, and provide indications of which fits the data better. We cannot say exactly what the exceedance probability of any observed flood is. However, given a long record, the probability can be described using the general “plotting position” equation from Bulletin 17C, as in Equation (10.2). \\[\\begin{equation} p_i=\\frac{i-a}{n+1-2a} \\tag{10.2} \\end{equation}\\] where n is the total number of data points (annual peak flows in this case), \\(p_i\\) is the exceedance probability of flood observation i, where flows are ranked in descending order (so the largest observed flood has \\(i=1\\) and the smallest is \\(i=n\\)). The parameter a is between 0 and 0.5. For simplicity, the following will use \\(a=0\\), so the plotting Equation (10.2) becomes the Weibull formula, Equation (10.3).
\\[\\begin{equation} p_i=\\frac{i}{n+1} \\tag{10.3} \\end{equation}\\] While not necessary, to add probabilities to the annual flow sequence we will create a new data frame consisting of the observed peak flows, sorted in descending order. df_pp <- as.data.frame(list('Obs_peak'=sort(Qpeak$Peak,decreasing = TRUE))) This can be done with fewer commands, but here is an example where first a rank column is created (1=highest peak in the record of N years), followed by adding columns for the exceedance and non-exceedance probabilities: df_pp$rank <- as.integer(seq(1:length(df_pp$Obs_peak))) df_pp$exc_prob <- (df_pp$rank/(1+length(df_pp$Obs_peak))) df_pp$ne_prob <- 1-df_pp$exc_prob For each of the non-exceedance probabilities calculated for the observed peak flows, use the flood frequency equation (10.1) to estimate the peak flow that would be predicted by a lognormal or LP-III distribution. This is the same thing that was done above for a specified return period, but now it will be “applied” to an entire column. df_pp$LN_peak <- mapply(function(x) {10^(mn+std*qnorm(x))}, df_pp$ne_prob) df_pp$LP3_peak <- mapply(function(x) {10^(mn+std*smwrBase::qpearsonIII(x, skew=g))},df_pp$ne_prob) There are many packages that create probability plots (see, for example, the versatile scales package for ggplot2). For this example the USGS smwrGraphs package is used. First, for aesthetics, create x- and y- axis labels. ylbl <- expression("Peak Flow, " ~ ft^{3}/s) xlbl <- "Non-exceedance Probability" The smwrGraphs package works most easily if it writes output directly to a file, a PNG file in this case, using the setPNG command; the file name and its dimensions in inches are given as arguments, and the PNG device is opened for writing. This is followed by commands to plot the data on a graph. Technically, the data are plotted to an object, here called prob.pl. The probPlot command plots the observed peaks as points, where the alpha argument is the a in Equation (10.2).
Additional points or lines are added with the addXY command, used here to add the LN and LP3 data as lines (one solid, one dashed). Finally, a legend is added (the USGS refers to that as an “Explanation”), and the output PNG file is closed with the dev.off() command. smwrGraphs::setPNG("probplot_smwr.png",6.5, 3.5) #> width height #> 6.5 3.5 #> [1] "Setting up markdown graphics device: probplot_smwr.png" prob.pl <- smwrGraphs::probPlot(df_pp$Obs_peak, alpha = 0.0, Plot=list(what="points",size=0.05,name="Obs"), xtitle=xlbl, ytitle=ylbl) prob.pl <- smwrGraphs::addXY(df_pp$ne_prob,df_pp$LN_peak,Plot=list(what="lines",name="LN"),current=prob.pl) prob.pl <- smwrGraphs::addXY(df_pp$ne_prob,df_pp$LP3_peak,Plot=list(what="lines",type="dashed",name="LP3"),current=prob.pl) smwrGraphs::addExplanation(prob.pl,"ul",title="") dev.off() #> png #> 2 The output won’t be immediately visible in RStudio – navigate to the file and click on it to view it. Figure 10.5 shows the output from the above commands. Figure 10.5: Probability plot for USGS gauge 11169000 for years 1930-2002. 10.2.5 Other software for peak flow analysis Much of the analysis above can be achieved using the PeakFQ software developed by the USGS. It incorporates the methods in Bulletin 17C via a graphical interface and can import data in the watstore format as discussed above in Section 10.2. The USGS has also produced the MGBT R package to perform many of the statistical calculations involved in the Bulletin 17C procedures. 10.3 Estimating floods from precipitation When extensive streamflow data are not available, flood risk can be estimated from precipitation and the characteristics of the area contributing flow to a point. While not covered here (or not yet…), there has been extensive development of hydrological modeling using R, summarized in recent papers (Astagneau et al., 2021; Slater et al., 2019). 
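One of the simplest of these precipitation-based methods, the Rational Formula (Q = CiA), can be coded directly. The sketch below uses hypothetical values for the runoff coefficient, rainfall intensity, and drainage area (none are from the text); in U.S. customary units the unit conversion factor is close enough to 1 that Q is in cfs when i is in in/hr and A is in acres.

```r
# Rational Formula sketch: Q = C*i*A in U.S. customary units
# (Q in cfs when i is in in/hr and A is in acres).
# All input values are hypothetical, chosen for illustration only.
C <- 0.7    # runoff coefficient, e.g. a densely developed area
i <- 1.5    # design rainfall intensity, in/hr
A <- 40     # drainage area, acres
Q <- C * i * A
cat(sprintf("Peak flow Q = %.0f cfs\n", Q))
#> Peak flow Q = 42 cfs
```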
Straightforward application of methods to estimate peak flows or hydrographs resulting from design storms can be done by writing code to apply the Rational Formula (included in the VFS and hydRopUrban packages, for example) or the NRCS peak flow method. For more sophisticated analysis of water supply and drought, continuous modeling is required. A very good introduction to hydrological modeling in R, including model calibration and assessment, is included in the Hydroinformatics at VT reference by JP Gannon. "],["sustainability-in-design-planning-for-change.html", "Chapter 11 Sustainability in design: planning for change 11.1 Perturbing a system 11.2 Detecting changes in hydrologic data 11.3 Detecting changes in extreme events", " Chapter 11 Sustainability in design: planning for change Figure 11.1: Yearly surface temperature compared to the 20th-century average from 1880–2022, from Climate.gov All systems engineered to last more than a decade or two, so everything civil engineers work on, will need to be designed to be resilient to dramatic environmental changes. As societies respond to the impacts of a disrupted climate, demands for water, energy, housing, food, and other essential services will change. This will result in economic disruption as well. This chapter presents a few ways long-term sustainability can be considered, looking at sensitivity of systems, detection of shifts or trends, and how economics and management may respond. This is much briefer than this rich topic deserves. The hydromisc package will need to be installed to access some of the data used below. If it is not installed, do so following the instructions on the github site for the package. 11.1 Perturbing a system When a system is perturbed it can respond in many ways. A useful classification of these was developed by Marshall & Toffel (2005). Figure 11.2 is an adaptation of Figure 2 from that paper. Figure 11.2: Pathways of recovery or degradation a system may take after initial perturbation.
In essence, after a system is degraded, it can eventually rebound to its original condition (Type 1), rebound to some other state that is degraded from its original (Types 2 and 3), or completely collapse (Type 4). Which path is taken depends on the degree of the initial disruption and the ability of the system to recover. While originally cast with time as the x-axis, Figure 11.2 is equally applicable when looking at a system that travels over a distance, such as a flowing river. The form of the curves in Figure 11.2 appears similar to a classic dissolved oxygen sag curve, as in Figure 11.3. Figure 11.3: Dissolved oxygen levels in a stream following an input of waste (source: EPA). The Streeter-Phelps equation describes the response of the dissolved oxygen (DO) levels in a water body to a perturbation, such as the discharge of wastewater with a high oxygen demand. Some important assumptions are that steady-state conditions exist, and the flow moves as plug flow, progressing downstream along a one-dimensional path. Following is the Streeter-Phelps Equation (11.1). \\[\\begin{equation} D=C_s-C=\\frac{K_1^\\prime L_0}{K_2^\\prime - K_1^\\prime}\\left(e^{-K_1^\\prime t}-e^{-K_2^\\prime t}\\right)+D_0e^{-K_2^\\prime t} \\tag{11.1} \\end{equation}\\] where \\(D\\) is the DO deficit, \\(C_s\\) is the saturation DO concentration, \\(C\\) is the DO concentration, \\(D_0\\) is the initial DO deficit, \\(L_0\\) is the ultimate (first-stage) BOD at the discharge, calculated from the 5-day BOD (with \\(t = 5\\) days) by Equation (11.2). \\[\\begin{equation} L_0=\\frac{BOD_5}{1-e^{-K_1^\\prime t}} \\tag{11.2} \\end{equation}\\] \\(K_1^\\prime\\) and \\(K_2^\\prime\\) are the deoxygenation and reaeration coefficients, both adjusted for temperature. Usually the coefficients \\(K_1\\) and \\(K_2\\) are defined at 20\\(^\\circ\\)C, and then adjusted by empirical relationships for the actual water temperature using Equation (11.3).
\\[\\begin{equation} K^\\prime = K\\theta ^{T-20} \\tag{11.3} \\end{equation}\\] where \\(\\theta\\) is set to typical values of 1.135 for \\(K_1\\) for \\(T\\le20^\\circ C\\) (and 1.056 otherwise) and 1.024 for \\(K_2\\). As a demonstration, functions (only available for SI units) in the hydromisc package can be used to explore the recovery of an aquatic system from a perturbation, as in Example 11.1. Example 11.1 A river with a flow of 7 \\(m^3/s\\) and a velocity of 1.4 m/s has effluent discharged into it at a rate of 1.5 \\(m^3/s\\). The river upstream of the discharge has a temperature of 15\\(^\\circ\\)C, a \\(BOD_5\\) of 1 mg/L, and a dissolved oxygen saturation of 90 percent. The effluent is 21\\(^\\circ\\)C with a \\(BOD_5\\) of 180 mg/L and a dissolved oxygen saturation of 0 percent. The deoxygenation rate constant (at 20\\(^\\circ\\)C) is 0.4 \\(d^{-1}\\), and the reaeration rate constant is 0.8 \\(d^{-1}\\). Create a plot of DO as a percent of saturation (y-axis) vs. distance in km (x-axis). First set up the model parameters. Q <- 7 # flow of stream, m3/s V <- 1.4 # velocity of stream, m/s Qeff <- 1.5 # flow rate of effluent, m3/s DOsatupstr <- 90 # DO saturation upstream of effluent discharge, % DOsateff <- 0 # DO saturation of effluent discharge, % Triv <- 15 # temperature of receiving water, C Teff <- 21 # temperature of effluent, C BOD5riv <- 1 # 5-day BOD of receiving water, mg/L BOD5eff <- 180 # 5-day BOD of effluent, mg/L K1 <- 0.4 # deoxygenation rate constant at 20C, 1/day K2 <- 0.8 # reaeration rate constant at 20C, 1/day Calculate some of the variables needed for the Streeter-Phelps model. Type ?hydromisc::DO_functions for more information on the DO-related functions in the hydromisc package. 
Tmix <- hydromisc::Mixture(Q, Triv, Qeff, Teff) K1adj <- hydromisc::Kadj_deox(K1=K1, T=Tmix) K2adj <- hydromisc::Kadj_reox(K2=K2, T=Tmix) BOD5mix <- hydromisc::Mixture(Q, BOD5riv, Qeff, BOD5eff) L0 <- BOD5mix/(1-exp(-K1adj*5)) #ultimate BOD from the BOD5 Find the dissolved oxygen for 100 percent saturation (assuming no salinity) and the initial DO deficit at the point of discharge. Cs <- hydromisc::O2sat(Tmix) #DO saturation, mg/l C0 <- hydromisc::Mixture(Q, DOsatupstr/100.*Cs, Qeff, DOsateff/100.*Cs) #DO init, mg/l D0 <- Cs - C0 #initial deficit Determine a set of distances where the DO deficit will be calculated, and the corresponding times for the flow to travel that distance. xs <- seq(from=0.5, to=800, by=5) ts <- xs*1000/(V*86400) Finally, calculate the DO (as a percent of saturation) and plot the results. DO_def <- hydromisc::DOdeficit(t=ts, K1=K1adj, K2=K2adj, L0=L0, D0=D0) DO_mgl <- Cs - DO_def DO_pct <- 100*DO_mgl/Cs plot(xs,DO_pct,xlim=c(0,800),ylim=c(0,100),type="l",xlab="Distance, km",ylab="DO, %") grid() Figure 11.4: Dissolved oxygen for this example. For this example, the saturation DO concentration is 9.9 mg/L, meaning the minimum value of the curve corresponds to about 4 mg/L. The EPA notes that values this low are below those recommended for the protection of aquatic life in freshwater. This shows that while the ecosystem has not collapsed (i.e., followed a Type 4 curve in Figure 11.2), effective ecosystem functions may be lost. 11.2 Detecting changes in hydrologic data Planning for decades or more requires the ability to determine whether changes are occurring or have already occurred. Two types of changes will be considered here: step changes, caused by an abrupt change such as deforestation or a new pollutant source, and monotonic (either always increasing or decreasing) trends, caused by more gradual shifts. These are illustrated in Figures 11.5 and 11.6.
Figure 11.5: A shift in phosphorus concentrations (source: USGS Scientific Investigations Report 2017-5006, App. 4, https://doi.org/10.3133/sir20175006). Figure 11.6: A trend in annual peak streamflow (source: USGS Professional Paper 1869, https://doi.org/10.3133/pp1869). Before performing calculations related to trend significance, refer to Chapter 4 of Statistical Methods in Water Resources (Helsel, D.R. et al., 2020) to review the relationship between hypothesis testing and statistical significance. Figure 11.7 from that reference illustrates this. Figure 11.7: Four possible results of hypothesis testing (source: Helsel et al., 2020). In the context of the example that follows, the null hypothesis, H0, is usually a statement that no trend exists. The \\(\\alpha\\)-value (the significance level) is the probability of incorrectly rejecting the null hypothesis, that is, rejecting H0 when it is in fact true. The significance level that is acceptable is a decision that must be made; a common value is \\(\\alpha\\)=0.05 (5 percent) significance, also referred to as \\(1-\\alpha=0.95\\) (95 percent) confidence. A statistical test will produce a p-value, which is loosely interpreted as the likelihood that the null hypothesis is true, or more technically, the probability of obtaining the calculated test statistic (or one more extreme) when the null hypothesis is true. Again, in the context of trend detection, small p-values (less than \\(\\alpha\\)) indicate greater confidence for rejecting the null hypothesis and thus supporting the existence of a “statistically significant” trend. One of the most robust effects of a warming climate is its impact on snow. In California, the peak of snow accumulation has historically occurred around April 1 on average. To demonstrate methods for detecting changes, data from the Four Trees Cooperative Snow Sensor site in California, obtained from the USDA National Water and Climate Center, are used.
These data are available as part of the hydromisc package. swe <- hydromisc::four_trees_swe plot(swe$Year, swe$April_1_SWE_in, xlab="Year", ylab="April 1 Snow Water Equivalent, in") lines(zoo::rollmean(swe, k=5), col="blue", lty=3, cex=1.4) Figure 11.8: April 1 snow water equivalent at Four Trees station, CA. The dashed line is a 5-year moving average. A plot is always useful – here a 5-year moving average, or rolling mean, is added (using the zoo package) to make any trends more observable. 11.2.1 Detecting a step change When there is a step change in a record, you need to test that the difference between the “before” and “after” conditions is large enough relative to natural variability that it can be confidently described as a change. In other words, whether the change is significant must be determined. This is done by breaking the data into two samples and applying a statistical test, such as a t-test or the nonparametric rank-sum (or Mann-Whitney U) test. While for this example there is no obvious reason to break this data at any particular year, we’ll just look at the first and second halves. Separate the two subsets of years into two arrays of (y) values (not data frames in this case) and then create a boxplot of the two periods. yvalues1 <- swe$April_1_SWE_in[(swe$Year >= 1980) & (swe$Year <= 2001)] yvalues2 <- swe$April_1_SWE_in[(swe$Year >= 2002) & (swe$Year <= 2023)] boxplot(yvalues1,yvalues2,names=c("1980-2001","2002-2023"),boxwex=0.2,ylab="swe, in") Figure 11.9: Comparison of two records of SWE at Four Trees station, CA. Calculate the means and medians of the two periods, just for illustration. mean(yvalues1) #> [1] 19.76364 mean(yvalues2) #> [1] 15.44545 median(yvalues1) #> [1] 17.8 median(yvalues2) #> [1] 7.9 The mean for the later period is lower, as is the median. The question to pose is whether these differences are statistically significant. The following tests allow that determination. 11.2.1.1 Method 1: Using a t-test.
A t-test determines the significance of a difference in the mean between two samples under a number of assumptions. These include independence of each data point (in this example, that any year’s April 1 SWE is uncorrelated with prior years) and that the data are normally distributed. This is performed with the t.test function. The alternative argument specifies that the test is “two sided”; a one-sided test would test for one group being only greater than or less than the other, but here we only want to test whether they are different. The paired argument is set to FALSE since there is no correspondence between the order of values in each subset of years. t.test(yvalues1, yvalues2, var.equal = FALSE, alternative = "two.sided", paired = FALSE) #> #> Welch Two Sample t-test #> #> data: yvalues1 and yvalues2 #> t = 0.91863, df = 40.084, p-value = 0.3638 #> alternative hypothesis: true difference in means is not equal to 0 #> 95 percent confidence interval: #> -5.181634 13.817998 #> sample estimates: #> mean of x mean of y #> 19.76364 15.44545 Here the p-value is 0.36, a value much greater than \\(\\alpha = 0.05\\), so the null hypothesis cannot be rejected. The difference in the means is not significant based on this test. 11.2.1.2 Method 2: Wilcoxon rank-sum (or Mann-Whitney U) test. Like the t-test, the rank-sum test produces a p-value, but it tests a more general measure of “central tendency” (such as a median) rather than a mean. Assumptions about independence of data are still necessary, but there is no requirement of normality of the distribution of data. It is less affected by outliers or a few extreme values than the t-test. This is performed with a standard R function. Other arguments are set as with the t-test.
wilcox.test(yvalues1, yvalues2, alternative = "two.sided", paired=FALSE) #> Warning in wilcox.test.default(yvalues1, yvalues2, alternative = "two.sided", : #> cannot compute exact p-value with ties #> #> Wilcoxon rank sum test with continuity correction #> #> data: yvalues1 and yvalues2 #> W = 297, p-value = 0.1999 #> alternative hypothesis: true location shift is not equal to 0 The p-value is much lower than with the t-test, showing less influence of the two very high SWE values in the second half of the record. 11.2.2 Detecting a monotonic trend In a similar way to the step change, a monotonic trend can be tested using parametric or non-parametric methods. Here the entire record is used to detect a trend over the full period. Linear regression may be used as a parametric method, which makes assumptions similar to the t-test (that residuals of the data are normally distributed). If the data do not conform to a normal distribution, the Mann-Kendall test can be applied, which is a non-parametric test. 11.2.2.1 Method 1: Regression To perform a linear regression in R, build a linear regression model (lm). This can take the swe data frame as input data, specifying the columns to relate linearly. m <- lm(April_1_SWE_in ~ Year, data = swe) summary(m)$coefficients #> Estimate Std. Error t value Pr(>|t|) #> (Intercept) 347.3231078 370.6915171 0.9369600 0.3541359 #> Year -0.1647357 0.1852031 -0.8894868 0.3788081 The row for “Year” provides the data on the slope. The slope shows SWE declines by 0.16 inches/year based on regression. The p-value for the slope is 0.379, much larger than the typical \\(\\alpha\\), meaning we cannot claim that a significant slope exists based on this test. So while a declining April 1 snowpack is observed at this location, it is not outside of the natural variability of the data based on a regression analysis. 11.2.2.2 Method 2: Mann-Kendall To conduct a Mann-Kendall trend test, additional packages need to be installed.
There are a number available; what is shown below is one method. A non-parametric trend test (and plot) requires a few extra packages, which are installed like this: if (!require('Kendall', quietly = TRUE)) install.packages('Kendall') #> Warning: package 'Kendall' was built under R version 4.2.3 if (!require('zyp', quietly = TRUE)) install.packages('zyp') #> Warning: package 'zyp' was built under R version 4.2.3 Now the significance of the trend can be calculated. The slope associated with this test, the “Theil-Sen slope”, is calculated using the zyp package. mk <- Kendall::MannKendall(swe$April_1_SWE_in) summary(mk) #> Score = -99 , Var(Score) = 9729 #> denominator = 934.4292 #> tau = -0.106, 2-sided pvalue =0.32044 ss <- zyp::zyp.sen(April_1_SWE_in ~ Year, data=swe) ss$coefficients #> Intercept Year #> 291.1637542 -0.1385452 The non-parametric slope shows April 1 SWE declining by 0.14 inches per year over the period. Again, however, the p-value is greater than the typical \\(\\alpha\\), so based on this method the trend is not significantly different from zero. As with the tests for a step change, the p-value is lower for the nonparametric test. A summary plot of the slopes of both methods is helpful. plot(swe$Year,swe$April_1_SWE_in, xlab = "Year",ylab = "Snow water equivalent, in") lines(swe$Year,m$fitted.values, lty=1, col="black") abline(a = ss$coefficients["Intercept"], b = ss$coefficients["Year"], col="red", lty=2) legend("topright", legend=c("Observations","Regression","Theil-Sen"), col=c("black","black","red"),lty = c(NA,1,2), pch = c(1,NA,NA), cex=0.8) Figure 11.10: Trends of SWE at Four Trees station, CA. 11.2.3 Choosing whether to use parametric or non-parametric tests Using the parametric tests above (t-test, regression) requires making an assumption about the underlying distribution of the data, which non-parametric tests do not require. When using a parametric test, the assumption of normality can be tested.
For example, the regression residuals can be tested with the following, where the null hypothesis is that the data are normally distributed. shapiro.test(m$residuals)$p.value #> [1] 0.003647395 This produces a very small p-value (p < 0.01), meaning the null hypothesis that the residuals are normally distributed is rejected with >99% confidence. This means a non-parametric test is more appropriate. In general, non-parametric tests are preferred in hydrologic work because data (and residuals) are rarely normally distributed. 11.3 Detecting changes in extreme events When looking at extreme events like the 100-year high tide, the methods are similar to those used in flood frequency analysis. One distinction is that flood frequency often uses a Gumbel or Log-Pearson type 3 distribution. For sea-level rise (and many other extreme events) other distributions are employed, a common one being the Generalized Extreme Value (GEV), the cumulative distribution of which is described by Equation (11.4). \\[\\begin{equation} F\\left(x;\\mu,\\sigma,\\xi\\right)=exp\\left[-\\left(1+\\xi\\left(\\frac{x-\\mu}{\\sigma}\\right)\\right)^{-1/\\xi}\\right] \\tag{11.4} \\end{equation}\\] The three parameters \\(\\xi\\), \\(\\mu\\), and \\(\\sigma\\) represent the shape, location, and scale of the distribution function. These distribution parameters can be determined using observations of extremes over a long period or over different periods of record, much as the mean, standard deviation, and skew are used in flood frequency calculations. The distribution can then be used to estimate the probability associated with a specific magnitude event, or conversely the event magnitude associated with a defined risk level. An excellent example of that is from Tebaldi et al. (2012), who analyzed projected extreme sea level changes through the 21st century. 
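Equation (11.4) is simple enough to evaluate directly in base R. The sketch below is not part of any package; it just restates the GEV cumulative distribution function, and the parameter values passed to it are illustrative only:

```r
# The GEV cumulative distribution of Equation (11.4), written in base R.
# Returns F, the probability of non-exceedance of the level x.
gev_cdf <- function(x, loc, scale, shape) {
  z <- 1 + shape * (x - loc) / scale
  exp(-z^(-1 / shape))   # valid where z > 0 (and shape != 0)
}
# Illustrative parameter values (not from any fitted record):
gev_cdf(2.35, loc = 2.07, scale = 0.10, shape = -0.25)
# The associated return period follows as T = 1/(1 - F)
```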
Figure 11.11: Projected return periods by 2050 for floods that are 100 yr events during 1959–2008, Tebaldi et al., 2012 An example using the GEV with sea level data is illustrated below. The Tebaldi et al. (2012) paper uses the R package extRemes, which we will use here. The same package has been used to study extreme wind, precipitation, temperature, streamflow, and other events, so it is a very versatile and widely-used package. Install the package if it is not already installed. if (!require('extRemes', quietly = TRUE)) install.packages('extRemes') #> Warning: package 'extRemes' was built under R version 4.2.3 #> Warning: package 'Lmoments' was built under R version 4.2.2 11.3.1 Obtaining and preparing sea-level data Sea-level data can be downloaded directly into R using the rnoaa package. However, NOAA also has a very intuitive interface that allows geographical searching and preliminary viewing of data. From the NOAA Tides & Currents site one can search an area of interest and find a tide gauge with a long record, as in Figure 11.12. Figure 11.12: Identification of a sea-level gauge on the NOAA Tides & Currents site. Exploring the data inventory for this station, on its home page, shows the gauge has a very long record, established in 1854, with measurement of extremes for over a century. Avoid selecting a partial month, or you may not have the ability to download monthly data. Monthly data were downloaded and saved as a csv file, which is available with the hydromisc package. datafile <- system.file("extdata", "sealevel_9414290_wl.csv", package="hydromisc") dat <- read.csv(datafile,header=TRUE) These data were saved in metric units, so all levels are in meters above the selected tidal datum. There are dates indicating the month associated with each value (and day 1 is in there as a placeholder). If there are any missing data they may be labeled as “NaN”. 
If you see that, a clean way to address it is to first change the missing data to NA (which R recognizes) with a command such as dat[dat == "NaN"] <- NA For this example we are looking at extreme tide levels, so only retain the “Highest” and “Date” columns. peak_sl <- subset(dat, select=c("Date", "Highest")) A final data preparation step is to create an annual time series with the maximum tide level in any year. One way to facilitate this is to add a column of “year.” Then the data can be aggregated by year, creating a new data frame, taking the maximum value for each year (many other functions, like mean, median, etc. can also be used). In this example the column names are changed to make it easier to work with the data. Also, the year column is converted to an integer for plotting purposes. Any rows with NA values are removed. peak_sl$year <- as.integer(strftime(peak_sl$Date, "%Y")) peak_sl_ann <- aggregate(peak_sl$Highest,by=list(peak_sl$year),FUN=max, na.rm=TRUE) colnames(peak_sl_ann) <- c("year","peak_m") peak_sl_ann <- na.exclude(peak_sl_ann) A plot is always helpful. plot(peak_sl_ann$year,peak_sl_ann$peak_m,xlab="Year",ylab="Annual Peak Sea Level, m") Figure 11.13: Annual highest sea-levels relative to MLLW at gauge 9414290. 11.3.2 Conducting the extreme event analysis The question we will attempt to address is whether the 100-year peak tide level (the level exceeded with a 1 percent probability) has increased between the 1900-1930 and 1990-2020 periods. Extract a subset of the data for one period and fit a GEV distribution to the values. peak_sl_sub1 <- subset(peak_sl_ann, year >= 1900 & year <= 1930) gevfit1 <- extRemes::fevd(peak_sl_sub1$peak_m) gevfit1$results$par #> location scale shape #> 2.0747606 0.1004844 -0.2480902 A plot of return periods for the fit distribution is available as well. extRemes::plot.fevd(gevfit1, type="rl") Figure 11.14: Return periods based on the fit GEV distribution for 1900-1930. 
Points are observations; dashed lines enclose the 95% confidence interval. As is usually the case, a statistical model does well in the area with observations, but the uncertainty increases for extreme values (like estimating a 500-year event from a 30-year record). A longer record produces better (less uncertain) estimates at higher return periods. Based on the GEV fit, the 100-year recurrence interval extreme tide is determined using: extRemes::return.level(gevfit1, return.period = 100, do.ci = TRUE, verbose = TRUE) #> #> Preparing to calculate 95 % CI for 100-year return level #> #> Model is fixed #> #> Using Normal Approximation Method. #> extRemes::fevd(x = peak_sl_sub1$peak_m) #> #> [1] "Normal Approx." #> #> [1] "100-year return level: 2.35" #> #> [1] "95% Confidence Interval: (2.2579, 2.4429)" A check can be done using the reverse calculation, estimating the return period associated with a specified value of highest water level. This can be done by extracting the three GEV parameters, then running the pevd command. loc <- gevfit1$results$par[["location"]] sca <- gevfit1$results$par[["scale"]] shp <- gevfit1$results$par[["shape"]] extRemes::pevd(2.35, loc = loc, scale = sca , shape = shp, type = c("GEV")) #> [1] 0.9898699 This returns a value of 0.99 (this is the CDF value, or the probability of non-exceedance, F). Recalling that return period, \\(T=1/P=1/(1-F)\\), where P=prob. of exceedance; F=prob. of non-exceedance, the result that 2.35 meters is the 100-year highest water level is validated. Repeating the calculation for a more recent period: peak_sl_sub2 <- subset(peak_sl_ann, year >= 1990 & year <= 2020) gevfit2 <- extRemes::fevd(peak_sl_sub2$peak_m) extRemes::return.level(gevfit2, return.period = 100, do.ci = TRUE, verbose = TRUE) #> #> Preparing to calculate 95 % CI for 100-year return level #> #> Model is fixed #> #> Using Normal Approximation Method. #> extRemes::fevd(x = peak_sl_sub2$peak_m) #> #> [1] "Normal Approx." 
#> #> [1] "100-year return level: 2.597" #> #> [1] "95% Confidence Interval: (2.3983, 2.7957)" This returns a 100-year high tide of 2.6 m for 1990-2020, a 10.6% increase over 1900-1930. Another way to look at this is to find out how the frequency of the past (in this case, 1900-1930) 100-year event has changed with rising sea levels. Repeating the calculations from before to capture the GEV parameters for the later period, and then plugging in the 100-year high tide from the early period: loc2 <- gevfit2$results$par[["location"]] sca2 <- gevfit2$results$par[["scale"]] shp2 <- gevfit2$results$par[["shape"]] extRemes::pevd(2.35, loc = loc2, scale = sca2 , shape = shp2, type = c("GEV")) #> [1] 0.7220968 This returns a value of 0.72 (72% non-exceedance, or 28% exceedance; in other words, we expect to see an annual high tide of 2.35 m or higher in 28% of the years). The return period of this is calculated as above: T = 1/(1-0.72) = 3.6 years. So, what was the 100-year event in 1900-1930 is about a 4-year event now. "],["management-of-water-resources-systems.html", "Chapter 12 Management of water resources systems 12.1 A simple linear system with two decision variables 12.2 More complex linear programming: reservoir operation 12.3 More Realistic Reservoir Operation: non-linear programming", " Chapter 12 Management of water resources systems Figure 12.1: Lookout Point Dam on the Middle Fork Willamette River source: U.S. Army Corps of Engineers Water resources systems tend to provide a variety of benefits, such as flood control, hydroelectric power, recreation, navigation, and irrigation. Each of these provides a benefit that can be quantified, and there are also associated costs that can be quantified. A challenge engineers face is how to manage a system to balance the different uses. Mathematical optimization, which can take many forms, is employed to do this. Introductions to linear programming and other forms of optimization are plentiful. 
For a background on the concepts and theories, refer to other references. An excellent, comprehensive reference is Water Resource Systems Planning and Management (Loucks & Van Beek, 2017), freely available online. What follows is a demonstration of using some of these optimization methods, but no recap of the theory is provided. The examples here use linear systems, where the objective function and constraints are all linear functions of the decision variables. The hydromisc package will need to be installed to access some of the data used below. If it is not installed, do so following the instructions on the github site for the package. 12.1 A simple linear system with two decision variables 12.1.1 Overview of problem formulation One of the simplest systems to optimize is a linear system of two variables, which means a graphical solution in 2-d is possible. This first demonstration is a reworking of a textbook example (Wurbs & James, 2002). To set up a solution, three things must be described: the decision variables – the variables for which optimal values are sought the constraints – physical or other limitations on the decision variables (or combinations of them) the objective function – an expression, using the decision variables, of what is to be minimized or maximized. 12.1.2 Setting up an example Example 12.1 To supply water to a community, there are two sources of water available with different levels of total dissolved solids (TDS): groundwater (TDS=800 mg/l) and surface water from a reservoir (TDS=100 mg/l). The first two constraints are that a total demand of water of 7,500 m\\(^3\\) must be met, and the delivered water (mixed groundwater and reservoir supplies) can have a maximum TDS of 400 mg/l. This is illustrated in Figure 12.2. Figure 12.2: A schematic of the system for this example. Two additional constraints are that groundwater withdrawal cannot exceed 4,000 m\\(^3\\) and reservoir withdrawals cannot exceed 7,500 m\\(^3\\). 
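The blending requirement can be sanity-checked numerically before setting up the formal problem. A small sketch (the trial withdrawal volumes below are arbitrary choices, not the optimal solution):

```r
# Mixed TDS (mg/l) of delivered water for trial withdrawals x1 (groundwater,
# TDS 800 mg/l) and x2 (reservoir, TDS 100 mg/l)
tds_mix <- function(x1, x2) (800 * x1 + 100 * x2) / (x1 + x2)
tds_mix(3000, 4500)   # 380 mg/l, below the 400 mg/l limit
```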
There are two decision variables: X1=groundwater and X2=reservoir supply. The objective is to minimize the reservoir withdrawal while meeting the constraints. The TDS constraint is reorganized as: \\[\\frac{800~X1+100~X2}{X1+X2}\\le 400~~~or~~~ 400~X1-300~X2\\le 0\\] Rewriting the other three constraints as functions of the decision variables: \\[\\begin{align*} X1+X2 \\ge 7500 \\\\ X1 \\le 4000 \\\\ X2 \\le 7500 \\end{align*}\\] Notice that the constraints are all expressed as linear functions of the decision variables (left side of the equations) and a value on the right. 12.1.3 Graphing the solution space While this can only be done easily for systems with two decision variables, a plot of the solution space can be made here by graphing all of the constraints and shading the region where all constraints are satisfied. Figure 12.3: The solution space, shown as the cross-hatched area. In the feasible region, it is clear that the minimum reservoir supply, X2, would be a little larger than 4,000 m\\(^3\\). 12.1.4 Setting up the problem in R An R package useful for solving linear programming problems is the lpSolveAPI package. Install that if necessary, and also install the knitr and kableExtra packages, since they are very useful for printing the many tables that linear programming involves. Begin by creating an empty linear model. The (0,2) means zero constraints (they’ll be added later) and 2 decision variables. The next two lines just assign names to the decision variables. Because we will use many functions of the lpSolveAPI package, load the library first. Load the kableExtra package too. library(lpSolveAPI) library(kableExtra) example.lp <- lpSolveAPI::make.lp(0,2) # 0 constraints and 2 decision variables ColNames <- c("X1","X2") colnames(example.lp) <- ColNames # Set the names of the decision variables Now set up the objective function. Minimization is the default goal of this R function, but we’ll set it anyway to be clear. 
The second argument is the vector of coefficients for the decision variables, meaning X2 is minimized. set.objfn(example.lp,c(0,1)) x <- lp.control(example.lp, sense="min") #save output to a dummy variable The next step is to define the constraints. Four constraints were listed above. Additional constraints could be added that \\(X1\\ge 0\\) and \\(X2\\ge 0\\); however, variable ranges in this LP solver are [0,infinity] by default, so for this example we do not need to include constraints for positive results. If necessary, decision variable valid ranges can be set using set.bounds(). Constraints are defined with the add.constraint command. Figure 12.4 provides an annotated example of the use of an add.constraint command. Figure 12.4: Annotated example of an add.constraint command. Type ?add.constraint in the console for additional details. The four constraints for this example are added with: add.constraint(example.lp, xt=c(400,-300), type="<=", rhs=0, indices=c(1,2)) add.constraint(example.lp, xt=c(1,1), type=">=", rhs=7500) add.constraint(example.lp, xt=c(1,0), type="<=", rhs=4000) add.constraint(example.lp, xt=c(0,1), type="<=", rhs=7500) That completes the setup of the linear model. You can view the model to verify the values you entered by typing the name of the model. example.lp #> Model name: #> X1 X2 #> Minimize 0 1 #> R1 400 -300 <= 0 #> R2 1 1 >= 7500 #> R3 1 0 <= 4000 #> R4 0 1 <= 7500 #> Kind Std Std #> Type Real Real #> Upper Inf Inf #> Lower 0 0 If the model has a large number of decision variables it only prints a summary, but in that case you can use write.lp(example.lp, "example_lp.txt", "lp") to create a viewable file with the model. Now the model can be solved. solve(example.lp) #> [1] 0 If the solver finds an optimal solution it will return a zero. 
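The optimum can also be checked independently of the solver: at the solution, the TDS constraint and the demand constraint are both binding, so the optimum lies at the intersection of the two lines \\(400~X1-300~X2=0\\) and \\(X1+X2=7500\\). A base-R sketch of that check:

```r
# Solve the two binding constraints as a 2x2 linear system:
#   400 X1 - 300 X2 = 0      (TDS limit)
#       X1 +     X2 = 7500   (total demand)
A <- matrix(c(400, -300,
                1,    1), nrow = 2, byrow = TRUE)
b <- c(0, 7500)
solve(A, b)   # X1 = 3214.286, X2 = 4285.714
```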
12.1.5 Interpreting the optimal results View the final value of the objective function by retrieving it and printing it: optimal_solution <- get.objective(example.lp) print(paste0("Optimal Solution = ",round(optimal_solution,2),sep="")) #> [1] "Optimal Solution = 4285.71" For more detail, recover the values of each of the decision variables. vars <- get.variables(example.lp) Next you can print the sensitivity report – a vector of M constraints followed by N decision variables. It helps to create a data frame for viewing and printing the results. Nicer printing is achieved using the kable and kableExtra functions. sens <- get.sensitivity.obj(example.lp)$objfrom results1 <- data.frame(variable=ColNames,value=vars,gradient=as.integer(sens)) kbl(results1, booktabs = TRUE) %>% kable_styling(full_width = F) variable value gradient X1 3214.286 -1 X2 4285.714 0 The above shows decision variable values for the optimal solution. The gradient is the change in the objective function for a unit increase in the decision variable. Here a negative gradient for decision variable \\(X1\\), the groundwater withdrawal, means that increasing the groundwater withdrawal will have a negative effect on the objective function (minimizing \\(X2\\)): that is intuitive, since increasing groundwater withdrawal can reduce reservoir supply on a one-to-one basis. To look at which constraints are binding, retrieve the $duals part of the output. m <- length(get.constraints(example.lp)) #number of constraints duals <- get.sensitivity.rhs(example.lp)$duals[1:m] results2 <- data.frame(constraint=c(seq(1:m)),multiplier=duals) kbl(results2, booktabs = TRUE) %>% kable_styling(full_width = F) constraint multiplier 1 -0.0014286 2 0.5714286 3 0.0000000 4 0.0000000 The multipliers for each constraint are referred to as Lagrange multipliers (or shadow prices). 
Non-zero values of the multiplier indicate a binding constraint, and give the change in the objective function that would result from a unit change in that constraint's value. Zero values are non-binding, since a unit change in their value has no effect on the optimal result. For example, constraint 3, that \\(X1 \\le 4000\\), with a multiplier of zero, could be changed (at least a small amount – there can be a limit after which it can become binding) with no effect on the optimal solution. Similarly, if constraint 2, \\(X1+X2 \\ge 7500\\), were increased, the objective function (the optimal reservoir supply) would also increase. 12.2 More complex linear programming: reservoir operation Water resources systems are far too complicated to be summarized by two decision variables and only a few constraints, as above. Example 12.2 demonstrates how the same procedure can be applied to a slightly more complex system. This is a reformulation of an example from the same text as referenced above (Wurbs & James, 2002). Example 12.2 A river flows into a storage reservoir where the operator must decide how much water to release each month. For simplicity, inflows will be described by a fixed sequence of 12 monthly flows. There are two downstream needs to satisfy: hydropower generation and irrigation diversions. Benefits are derived from these two uses: revenues are $800 per 10\\(^6\\)m\\(^3\\) of water diverted for irrigation, and $350 per 10\\(^6\\)m\\(^3\\) for hydropower generation. The objective is to determine the releases that will maximize the total revenue. There are physical characteristics of the system that provide some constraints, and others are derived from basic physics, such as the conservation of mass. A schematic of the system is shown in Figure 12.5. Figure 12.5: A schematic of the water resources system for this example. Diversions through the penstock to the hydropower facility are limited to its capacity of 160 10\\(^6\\)m\\(^3\\)/month. 
For reservoir releases less than that, all of the released water can generate hydropower; flows above that capacity will spill without generating hydropower benefits. The reservoir has a volume of 550 10\\(^6\\)m\\(^3\\), so anything above that will have to be released. Assume the reservoir is at half capacity initially. The irrigation demand varies by month, and diversions up to the demand will produce benefits. The demands (in 10\\(^6\\)m\\(^3\\)) are:
Jan (1): 0, Feb (2): 0, Mar (3): 0, Apr (4): 0
May (5): 40, Jun (6): 130, Jul (7): 230, Aug (8): 250
Sep (9): 180, Oct (10): 110, Nov (11): 0, Dec (12): 0
12.2.1 Problem summary There are 48 decision variables in this problem: 12 monthly values each for reservoir storage (s\\(_1\\)-s\\(_{12}\\)), release (r\\(_1\\)-r\\(_{12}\\)), hydropower generation (h\\(_1\\)-h\\(_{12}\\)), and agricultural diversion (d\\(_1\\)-d\\(_{12}\\)). The objective function is to maximize the revenue, which is expressed by Equation (12.1). \\[\\begin{equation} Maximize~ x_0=\\sum_{i=1}^{12}\\left(350h_i+800d_i\\right) \\tag{12.1} \\end{equation}\\] Constraints will need to be described to apply the limits to hydropower diversion and storage capacity, and to limit agricultural diversions to no more than the demand. 12.2.2 Setting up the problem in R Create variables for the known or assumed initial values for the system. penstock_cap <- 160 #penstock capacity in million m3/month res_cap <- 550 #reservoir capacity in million m3 res_init_vol <- res_cap/2 #set initial reservoir capacity equal to half of capacity irrig_dem <- c(0,0,0,0,40,130,230,250,180,110,0,0) revenue_water <- 800 #revenue for delivered irrigation water, $/million m3 revenue_power <- 350 #revenue for power generated, $/million m3 A time series of 20 years (January 2000 through December 2019) of monthly flows for this exercise is included with the hydromisc package. 
Load that and extract the first 12 months to use in this example. inflows_20years <- hydromisc::inflows_20years inflows <- as.numeric(window(inflows_20years, start = c(2000, 1), end = c(2000, 12))) It helps to illustrate how the irrigation demands and inflows vary, and therefore why storage might be useful in regulating flow to provide more reliable irrigation deliveries. par(mgp=c(2,1,0)) ylbl <- expression(10 ^6 ~ m ^3/month) plot(inflows, type="l", col="blue", xlab="Month", ylab=ylbl) lines(irrig_dem, col="darkgreen", lty=2) legend("topright",c("Inflows","Irrigation Demand"),lty = c(1,2), col=c("blue","darkgreen")) grid() Figure 12.6: Inflows and irrigation demand. 12.2.3 Building the linear model Following the same steps as for a simple 2-variable problem, begin by setting up a linear model. Because there are so many decision variables, it helps to add names to them. reser.lp <- make.lp(0,48) DecisionVarNames <- c(paste0("s",1:12),paste0("r",1:12),paste0("h",1:12),paste0("d",1:12)) colnames(reser.lp) <- DecisionVarNames From this point on, the decision variables will be addressed by their indices, that is, their numeric position in this sequence of 48 values. To summarize their positions:
Storage (s1-s12): indices 1-12
Release (r1-r12): indices 13-24
Hydropower (h1-h12): indices 25-36
Irrigation diversion (d1-d12): indices 37-48
Using these indices as a guide, set up the objective function and initialize the linear model. While not necessary, redirecting the output of lp.control to a variable prevents a lot of output to the console. The following takes the revenue from hydropower and irrigation (in $ per 10\\(^6\\)m\\(^3\\)/month), multiplies them by the 12 monthly values for the hydropower flows and the irrigation deliveries, and sets the objective to maximize their sum, as described by Equation (12.1). 
set.objfn(reser.lp,c(rep(revenue_power,12),rep(revenue_water,12)),indices = c(25:48)) x <- lp.control(reser.lp, sense="max") With the LP setup, the constraints need to be applied. Negative releases, storage, or river flows don’t make sense, so they all need to be positive, so \\(s_t\\ge0\\), \\(r_t\\ge0\\), \\(h_t\\ge0\\) for all 12 months, but because the lpSolveAPI package assumes all decision variables have a range of \\(0\\le x\\le \\infty\\) these do not need to be explicitly added as constraints. When using other software packages these may need to be included. 12.2.3.1 Constraints 1-12: Maximum storage The maximum capacity of the reservoir cannot be exceeded in any month, or \\(s_t\\le 550\\) for all 12 months. This can be added in a simple loop: for (i in seq(1,12)) { add.constraint(reser.lp, xt=c(1), type="<=", rhs=res_cap, indices=c(i)) } 12.2.3.2 Constraints 13-24: Irrigation diversions The irrigation diversions should never exceed the demand. While demands for some months are zero, decision variables are all assumed non-negative, so we can simply constrain all irrigation deliveries using the \\(\\le\\) operator. for (i in seq(1,12)) { add.constraint(reser.lp, xt=c(1), type="<=", rhs=irrig_dem[i], indices=c(i+36)) } 12.2.3.3 Constraints 25-36: Hydropower Hydropower release cannot exceed the penstock capacity in any month: \\(h_t\\le 160\\) for all 12 months. This can be done following the example above for the maximum storage constraint. for (i in seq(1,12)) { add.constraint(reser.lp, xt=c(1), type="<=", rhs=penstock_cap, indices=c(i+24)) } 12.2.3.4 Constraints 37-48: Reservoir release Reservoir release must equal or exceed irrigation deliveries, which is another way of saying that the water remaining in the river after the diversion cannot be negative. In other words \\(r_1-d_1\\ge 0\\), \\(r_2-d_2\\ge 0\\), … for all 12 months. 
For constraints involving more than one decision variable the constraint equations look a little different, and keeping track of the indices associated with each decision variable is essential. for (i in seq(1,12)) { add.constraint(reser.lp, xt=c(1,-1), type=">=", rhs=0, indices=c(i+12,i+36)) } 12.2.3.5 Constraints 49-60: Hydropower Hydropower generation will be less than or equal to reservoir release in every month, or \\(r_1-h_1\\ge 0\\), \\(r_2-h_2\\ge 0\\), … for all 12 months. for (i in seq(1,12)) { add.constraint(reser.lp, xt=c(1,-1), type=">=", rhs=0, indices=c(i+12,i+24)) } 12.2.3.6 Constraints 61-72: Conservation of mass Finally, considering the reservoir, the inflow minus the outflow in any month must equal the change in storage over that month. That can be expressed in an equation with decision variables on the left side as: \\[s_t-s_{t-1}+r_t=inflow_t\\] where \\(t\\) is a month from 1-12 and \\(s_t\\) is the storage at the end of month \\(t\\). We need to use the initial reservoir volume, \\(s_0\\) (given above in the problem statement) for the first month’s mass balance, so the above would become \\(s_1-s_0+r_1=inflow_1\\), or \\(s_1+r_1=inflow_1+s_0\\). All subsequent months can be assigned in a loop, add.constraint(reser.lp, xt=c(1,1), type="=", rhs=inflows[1]+res_init_vol, indices=c(1,13)) for (i in seq(2,12)) { add.constraint(reser.lp, xt=c(1,-1,1), type="=", rhs=inflows[i], indices=c(i,i-1,i+12)) } This completes the LP model setup. Especially for larger models, it is helpful to save the model. You can use something like write.lp(reser.lp, \"reservoir_LP.txt\", \"lp\") to create a file (readable using any text file viewer, like Notepad++) with all of the model details. It can also be read into R with the read.lp command to load the complete LP. The beginning of the file for this LP looks like: Figure 12.7: The top of the linear model file produced by write.lp(). 
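As a quick sanity check before solving, the finished model should have 72 constraints (six sets of 12) and 48 decision variables; lpSolveAPI exposes this through a dim method for its model objects:

```r
# Confirm the model dimensions: rows = constraints, columns = variables
dim(reser.lp)   # expect 72 rows and 48 columns
```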
12.2.3.7 Solving the model and interpreting output Solve the LP and retrieve the value of the objective function. solve(reser.lp) #> [1] 0 get.objective(reser.lp) #> [1] 1230930 To look at the hydropower generation, and to see how often spill occurs, it helps to view the associated decision variables (as noted above, these are indices 13-24 and 25-36). vars <- get.variables(reser.lp) # retrieve decision variable values results0 <- data.frame(variable=DecisionVarNames,value=vars) r0 <- cbind(results0[13:24, ], results0[25:36, ]) rownames(r0) <- c() names(r0) <- c("Decision Variable","Value","Decision Variable","Value") kbl(r0, booktabs = TRUE) %>% kable_styling(bootstrap_options = c("striped","condensed"),full_width = F) Decision Variable Value Decision Variable Value r1 160.00000 h1 160.00000 r2 160.00000 h2 160.00000 r3 160.00000 h3 160.00000 r4 89.44193 h4 89.44193 r5 40.00000 h5 40.00000 r6 130.00000 h6 130.00000 r7 230.00000 h7 160.00000 r8 197.58616 h8 160.00000 r9 160.00000 h9 160.00000 r10 112.03054 h10 112.03054 r11 96.96217 h11 96.96217 r12 105.45502 h12 105.45502 Figure 12.8: Reservoir releases and hydropower water use for optimal solution. For this optimal solution, the releases exceed the capacity of the penstock supplying the hydropower plant in July and August, meaning there would be reservoir spill during those months. Another important part of the output is the degree to which irrigation demand is met. The irrigation delivery is associated with decision variables with indices 37-48. Decision Variable Value Irrigation Demand, 10\\(^6\\)m\\(^3\\) d1 0.0000 0 d2 0.0000 0 d3 0.0000 0 d4 0.0000 0 d5 40.0000 40 d6 130.0000 130 d7 230.0000 230 d8 197.5862 250 d9 160.0000 180 d10 110.0000 110 d11 0.0000 0 d12 0.0000 0 August and September see a shortfall in irrigation deliveries where full demand is not met. Finally, finding which constraints are binding can provide insights into how a system might be modified to improve the optimal solution. 
This is done similarly to the simpler problem above, by retrieving the duals portion of the sensitivity results. To address the question of whether the size of the reservoir is a binding constraint, that is, whether increasing reservoir size would improve the optimal results, only the first 12 constraints are printed. m <- length(get.constraints(reser.lp)) # retrieve the number of constraints duals <- get.sensitivity.rhs(reser.lp)$duals[1:m] results2 <- data.frame(Constraint=c(seq(1:m)),Multiplier=duals) kbl(results2[1:12,], booktabs = TRUE) %>% kable_styling(bootstrap_options = c("striped","condensed"),full_width = F) Constraint Multiplier 1 0 2 0 3 0 4 0 5 450 6 0 7 0 8 0 9 0 10 0 11 0 12 0 For this example, in only one month would a larger reservoir have a positive impact on the maximum revenue. 12.3 More Realistic Reservoir Operation: non-linear programming While the simple examples above illustrate how an optimal solution can be determined for a linear (and deterministic) reservoir system, in reality reservoirs are much more complex. Most reservoir operation studies use sophisticated software to develop and apply rule curves for reservoirs, aiming to optimally store and release water, preserving the storage pools as needed. Figure 12.9 shows how reservoir volumes are managed. Figure 12.9: Sample reservoir operating goals U.S. Army Corps of Engineers Many rule curves depend on the condition of the system at some prior time. Figure 12.10 shows a rule curve used to operate Folsom Reservoir on the American River in California, where the target storage depends on the total upstream storage available. Figure 12.10: Multiple rule curves based on upstream storage U.S. Army Corps of Engineers Report RD-48 One method for deriving an optimal solution for the nonlinear and random processes in a water resources system is stochastic dynamic programming (SDP). Like LP, SDP uses algorithms that optimize an objective function under specified constraints. 
However, SDP can accommodate non-linear, dynamic outcomes, such as those associated with flood risks or other stochastic events. SDP can combine the stochastic information with reservoir management actions, where the outcome of decisions can be dependent on the state of the system (as in Figure 12.10). Constraints can be set to be met a certain percentage of the time, rather than always. 12.3.1 Reservoir operation While SDP is a topic that is far more advanced than what will be covered here, one R package will be introduced. For reservoir optimization, the R package reservoir can use SDP to derive an optimal operating rule for a reservoir given a sequence of inflows, using single or multiple constraints. The package can also take any derived rule curve and operate a reservoir using it, which is what will be demonstrated here. First, place the optimal releases, according to the LP above, into a new vector to be used as a set of target releases for the reservoir operation. target_release <- results0[13:24, ]$value The reservoir can be operated (for the same 12-month period, with the same 12 inflows as above) with a single command. x <- reservoir::simRes(inflows, target_release, res_cap, plot = F) The total revenue from hydropower generation and irrigation deliveries is computed as follows. irrig_releases <- pmin(x$releases,irrig_dem) irrig_benefits <- sum(irrig_releases*revenue_water) hydro_releases <- pmin(x$releases,penstock_cap) hydro_benefits <- hydro_releases*revenue_power sum(irrig_benefits,hydro_benefits) #> [1] 1230930 Unsurprisingly, this produces the same result as with the LP example. 12.3.2 Performing stochastic dynamic programming The optimal releases, or target releases, were established based on a single year. The SDP in the reservoir package can be used to determine optimal releases based on a time series of inflows. Here the entire 20-year inflow sequence is used to generate a multiobjective optimal solution for the system. 
A weighting must be applied to describe the importance of meeting different parts of the objective function. The target release(s) cannot be zero, so a small constant is added.

```
weight_water <- revenue_water / (revenue_water + revenue_power)
weight_power <- revenue_power / (revenue_water + revenue_power)
z <- reservoir::sdp_multi(inflows_20years, cap = res_cap,
                          target = irrig_dem + 0.01,
                          R_max = penstock_cap, spill_targ = 0.95,
                          weights = c(weight_water, weight_power, 0.00),
                          loss_exp = c(1, 1, 1), tol = 0.99,
                          S_initial = 0.5, plot = FALSE)
irrig_releases2 <- pmin(z$releases, irrig_dem)
irrig_benefits2 <- sum(irrig_releases2 * revenue_water)
hydro_releases2 <- pmin(z$releases, penstock_cap)
hydro_benefits2 <- hydro_releases2 * revenue_power
sum(irrig_benefits2, hydro_benefits2) / 20
#> [1] 911240
```

For a 20-year period, the average annual revenue will always be less than that for a single year in which the optimal releases were designed based on that same year.

Chapter 13 Groundwater

Figure 13.1: A conceptual aquifer with a pumping well (source: U.S. Geological Survey)

Groundwater content is forthcoming…