This is the final paper for the project I collaborated on with William P. Roeder at the 45th Weather Squadron (45 WS). The goal of this project was to improve the minimum temperature predictions made by the 45 WS for space launch operations at Cape Canaveral Air Force Station (CCAFS) and the Kennedy Space Center (KSC). By the end of the project, the 45 WS's minimum temperature predictions were significantly improved, and the squadron began using the new minimum temperature algorithm during the 2014/2015 winter season. This project was one major step toward improving the overall minimum temperature tool.
Optimization of the 45th weather squadron’s linear first guess equation prese... - James Brownlee
This document describes research to optimize the 45th Weather Squadron's linear "first guess" equation for predicting minimum temperatures. The previous equation was optimized for warmer temperatures and had a warm bias. A new equation was developed using colder temperature data from 1986-2014 and optimized using regression analysis, reducing the RMSE by 64.5% and bias by 85.1% when verified on 2010-2014 data. Removing some outlier data points resulted in more realistic error statistics. Further optimization of correction factors, particularly clouds, is recommended to improve the overall minimum temperature prediction tool.
Isentropic Blow-Down Process and Discharge Coefficient - Steven Cooke
The document describes an experiment to study the transient discharge of a pressurized tank through orifices of varying diameters, as well as a long tube, and compare the actual blowdown processes to an ideal isentropic process. An MKS pressure transducer and T-type thermocouple were calibrated. Pressure and temperature data were recorded during blowdown for each orifice/tube. The actual temperature decayed much more than the calculated isentropic temperature due to heat transfer. Discharge coefficients were calculated and ranged from 0.59 to 0.71, decreasing with smaller orifices/tubes due to friction.
The document discusses the second law of thermodynamics. It defines a heat engine as a system that develops net work from a heat supply, requiring both a hot and cold reservoir. The second law states that the gross heat supplied must be greater than the net work done, meaning the efficiency of a heat engine is always less than 100%. Entropy is introduced as a property that enables the representation of heat flow on temperature-entropy diagrams. Such diagrams are shown and described for steam.
This document describes the simulated annealing algorithm. It begins with an introduction that outlines the document. It then provides a formal definition of simulated annealing, explaining how it is analogous to the way metals cool and form crystalline structures. The core of the document describes the simulated annealing algorithm, including initializing a solution, evaluating solutions, making random changes, applying a criterion to accept or reject changes, and gradually lowering the temperature. It provides an example of applying the algorithm to the 40 queens puzzle. It also discusses tuning the algorithm by setting the initial and final temperatures. In conclusion, it covers the advantages, disadvantages, and applications of simulated annealing.
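For readers who want the loop made concrete, here is a minimal, generic Python sketch of the algorithm as summarized above. The objective and perturbation functions are placeholders to be supplied by the user; for the 40 queens puzzle they would be a board evaluation and a random queen move.

```python
import math
import random

def simulated_annealing(initial, evaluate, perturb,
                        t_start=100.0, t_end=0.1, alpha=0.95):
    """Generic simulated annealing loop: perturb, accept/reject, cool."""
    current, current_cost = initial, evaluate(initial)
    best, best_cost = current, current_cost
    t = t_start
    while t > t_end:
        candidate = perturb(current)          # random change to the solution
        delta = evaluate(candidate) - current_cost
        # Metropolis criterion: always accept improvements; accept a worse
        # solution with probability exp(-delta / t), which shrinks as t drops.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= alpha                            # gradually lower the temperature
    return best, best_cost
```

Tuning amounts to choosing t_start, t_end, and the cooling factor alpha, exactly the knobs the document discusses.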
Agu chen a31_g-2917_retrieving temperature and relative humidity profiles fro... - Maosi Chen
Atmospheric temperature and relative humidity profiles are fundamental for atmospheric research such as numerical weather prediction and climate change assessment. Hyperspectral satellite data contain a wealth of relevant information and have been used in many algorithms (e.g., regression-based methods) to retrieve these profiles. Deep Learning, or a Deep Neural Network (DNN), is capable of finding complex relationships (functions) between pairs of input and output variables by assembling many simple non-linear modules together and learning their parameters from large amounts of observations. DNNs have been successfully applied in many fields (such as image classification, object detection, and language translation). In this study, we explored the potential of retrieving atmospheric profiles from hyperspectral satellite radiation data using a DNN. The data requirement for applying the DNN technique is satisfied by the large amount of hyperspectral radiance data provided by the United States Suomi National Polar-orbiting Partnership (NPP) Cross-track Infrared Sounder (CrIS) and the reanalyzed atmospheric profile data provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). The proposed DNN consists of two consecutive parts. In the first part, the first 1245 bands of the NPP CrIS hyperspectral radiance data (648.75 to 2555 cm⁻¹) are compressed into a 300-element vector representing their key features by stacked AutoEncoders. In the second part, a multi-layer Self-Normalizing Neural Network (SNN) maps the compressed vector (of 300 elements) into 55-layer temperature and relative humidity profiles. The DNN trainable variables are optimized by minimizing the difference between its predictions and the matched ECMWF temperature and humidity profiles (53,230 samples). Finally, the DNN-retrieved atmospheric temperature and relative humidity profiles, and those provided by the NOAA Unique Combined Atmospheric Processing System (NUCAPS, the official retrieval products for CrIS), are compared with matched radiosonde observations at one location.
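As a rough illustration of the two-part architecture the abstract describes, here is a PyTorch sketch. The 1245-channel input, 300-element bottleneck, and 55+55-level output come from the abstract; the hidden-layer widths and the use of ReLU in the autoencoder are assumptions (only the SNN's SELU activation is characteristic of that architecture).

```python
import torch
import torch.nn as nn

# Stage 1: an autoencoder compresses 1245 CrIS radiance channels into a
# 300-element feature vector (hidden width of 700 is an assumption).
encoder = nn.Sequential(
    nn.Linear(1245, 700), nn.ReLU(),
    nn.Linear(700, 300),
)
decoder = nn.Sequential(          # used only while pre-training the encoder
    nn.Linear(300, 700), nn.ReLU(),
    nn.Linear(700, 1245),
)

# Stage 2: a self-normalizing network (SELU activations) maps the
# 300 features to 55 temperature + 55 relative-humidity levels.
snn = nn.Sequential(
    nn.Linear(300, 256), nn.SELU(),
    nn.Linear(256, 256), nn.SELU(),
    nn.Linear(256, 110),          # 55 T levels + 55 RH levels
)

x = torch.randn(8, 1245)          # a batch of (normalized) radiance spectra
profiles = snn(encoder(x))        # -> shape (8, 110)
t_profile, rh_profile = profiles[:, :55], profiles[:, 55:]
```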
This document discusses the use of an "over-conductivity function" to model the natural cooling process in steam turbines. It summarizes previous research on modeling natural cooling and validates the over-conductivity approach on three additional turbines. The over-conductivity function replaces complex fluid dynamics with an equivalent higher conductivity, allowing faster simulations while maintaining 15-18°C accuracy compared to temperature measurements during natural cooling periods of over 100 hours.
Efficiency of change of state of gases apparatus - Faizan Shabbir
The experiment aims to determine the efficiency of changing a gas's state by measuring the work done on the gas during compression using Boyle's law, timing how long it takes to compress the gas fully, and calculating efficiency by dividing the power output by the power input from the battery over time. The apparatus includes a chamber to compress the gas and a stopwatch to time compression. Gas pressure and volume are recorded at different states to calculate work done using the formula for constant temperature.
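A small numeric sketch of that efficiency calculation, with illustrative values rather than the experiment's own readings: the isothermal work follows from Boyle's law, and the input energy is the electrical power drawn over the compression time.

```python
import math

# Illustrative numbers (not from the experiment): isothermal compression
# of air from state 1 to state 2, driven by a battery-powered motor.
P1, V1 = 101_325.0, 5.0e-4      # Pa, m^3  (initial pressure and volume)
V2 = 2.0e-4                     # m^3      (final volume)
t = 12.0                        # s        (measured compression time)
V_batt, I_batt = 6.0, 1.5       # battery voltage (V) and current (A)

# Work on the gas at constant temperature (Boyle's law, P*V = const):
# W = P1*V1 * ln(V1/V2)
work = P1 * V1 * math.log(V1 / V2)          # J
power_out = work / t                        # W, useful power delivered to gas
power_in = V_batt * I_batt                  # W, electrical power drawn
print(f"efficiency = {power_out / power_in:.1%}")
```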
The document summarizes Chapter 7 of a textbook on thermodynamics. It includes in-text concept questions, concept problems, sections on heat engines/refrigerators, the second law and processes, Carnot cycles and absolute temperature, finite temperature heat transfer, and ideal gas Carnot cycles. It also includes review problems at the end. The chapter examines concepts related to heat engines, refrigerators, the second law of thermodynamics, and Carnot cycles.
This document summarizes a study that used the Surface Energy Balance System (SEBS) model with MODIS satellite data and NCEP reanalysis data to estimate sensible heat flux in the Arou region of China from May to September 2011. The SEBS-estimated sensible heat fluxes showed good agreement with in-situ measurements from a Large Aperture Scintillometer, especially from July to September when vegetation cover was densest. A sensitivity analysis found that sensible heat flux was most sensitive to temperature difference between surface and reference height and surface roughness length.
This document discusses various concepts related to simulated annealing including the acceptance function, initial temperature, equilibrium state, cooling schedule, stopping condition, and handling constraints. It describes how the acceptance of non-improving moves is based on temperature and change in objective function. It also provides examples of different cooling schedules and discusses how to determine equilibrium state and stopping criteria. The document concludes with applying simulated annealing to solve the knapsack problem.
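A compact sketch of the two pieces the document emphasizes, the temperature-dependent acceptance test and a couple of common cooling schedules (the constants alpha and beta are illustrative tuning values):

```python
import math
import random

def accept(delta, temperature):
    """Accept a non-improving move with probability exp(-delta/T)."""
    return delta <= 0 or random.random() < math.exp(-delta / temperature)

# Two common cooling schedules:
geometric = lambda t, alpha=0.95: alpha * t      # T_{k+1} = alpha * T_k
linear    = lambda t, beta=0.5:  t - beta        # T_{k+1} = T_k - beta
```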
This document provides an introduction to basic thermodynamics concepts. It defines key terms like system, boundary, surroundings, open and closed systems. It explains the differences between intensive and extensive properties, and defines state, process, and cycle. The document also covers the first law of thermodynamics, the differences between work and heat transfer, sign conventions, and the concept of internal energy. The objectives are to understand these fundamental concepts and the first law of thermodynamics.
first law of thermodynamics and second law - naphis ahamad
This document discusses the first law of thermodynamics and conservation of energy. It explains that the first law states that energy cannot be created or destroyed, only transformed between different forms. The total energy in a closed system remains constant. The document provides several examples of applying the first law to closed systems, control volumes, and various thermodynamic processes like isochoric, isobaric, and polytropic processes. It also discusses other concepts like the conservation of mass, work done by fluids, and applying energy balances to devices like nozzles, turbines, and heat exchangers.
This document provides an overview of basic thermodynamics concepts including:
- The objectives of understanding the laws of thermodynamics and their constants.
- Definitions of perfect gases and their properties of pressure, volume, and temperature.
- Explanations of Boyle's Law, Charles' Law, and the Universal Gas Law.
- Introduction of specific heat capacity at constant volume and constant pressure.
- Examples demonstrating applications of the gas laws and calculations involving specific heat (a short worked sketch follows this list).
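As a small worked sketch of the gas-law and specific-heat calculations such examples cover (all numbers illustrative; the cv and cp values are typical molar figures for air):

```python
R = 8.314  # J/(mol*K), universal gas constant

# Boyle's law at constant T: P1*V1 = P2*V2
P1, V1, V2 = 100e3, 0.02, 0.01           # Pa, m^3, m^3
P2 = P1 * V1 / V2                        # -> 200 kPa

# Charles' law at constant P: V1/T1 = V2/T2
V1c, T1, T2 = 0.02, 300.0, 360.0         # m^3, K, K
V2c = V1c * T2 / T1                      # -> 0.024 m^3

# Heating 1 mol of a perfect gas by 50 K:
cv, cp = 20.8, 29.1                      # J/(mol*K), assumed values for air
dT = 50.0
q_const_volume = cv * dT                 # heat added at constant volume
q_const_pressure = cp * dT               # heat added at constant pressure
# Mayer's relation for a perfect gas: cp - cv = R (~8.3 J/(mol*K) here)
```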
The document discusses a heat exchanger network synthesis project. It analyzes the heat flow of an industrial process initially and with a proposed heat integration case using pinch analysis. The initial case shows no heat recovery, while the proposed case introduces a cold utility and identifies a large heat recovery pocket. Utility savings of the proposed case are estimated at 98.2% compared to the initial case and base case without integration.
This document discusses control strategies used to compensate for beam-induced heat loads in the Large Hadron Collider's (LHC) cryogenic systems in real-time. The LHC beam deposits significant heat in beam screens through synchrotron radiation, image currents, and electron clouds. Several control strategies were developed, including feed-forward compensation that forecasts beam-induced heat loads to preemptively adjust cryogenic systems. These strategies were modeled and simulated before successful deployment in 2015, allowing the LHC to operate at full energy while maintaining stable beam screen temperatures despite dynamic heat loads.
The document proposes replacing pressure reducing valves (PRVs) with backpressure steam turbine-generators at a heat plant to increase efficiency. Currently, PRVs reduce steam pressure from 180 psig to 25 psig without recovering work. Installing a turbine would capture this pressure reduction as electricity. Calculations show the turbine cycle would achieve an efficiency of 77.2% versus 76.6% for the PRV cycle. Case studies at other universities found turbines reduced emissions by 1,200-2,000 metric tons annually and saved $120,000-275,000 per year. A pre-design analysis identified space in the plant basement as optimal for a turbine, which could fit through double doors and utilize existing steam piping.
1. The student conducted an experiment to verify Fourier's law of heat conduction and determine how heat transfers linearly through a material.
2. The apparatus included a display and control unit, measuring object, and setups for radial and linear heat conduction experiments.
3. Temperature readings were taken from 9 sensors placed along the object, and calculations were done to find the thermal conductivity (K) at each point using the heat (Q) supplied, the temperature difference, and the distance between sensors; a worked sketch of this calculation follows below.
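A worked sketch of that conductivity calculation under assumed values for the heat input, bar geometry, and sensor readings (none of these are the lab's actual numbers):

```python
# Estimate the thermal conductivity K between each adjacent pair of the
# 9 sensors along the bar, from Fourier's law Q = K*A*dT/dx.
Q = 15.0                 # W, heat supplied (assumed)
A = 4.9e-4               # m^2, cross-sectional area of the bar (assumed)
dx = 0.01                # m, spacing between adjacent sensors (assumed)
temps = [85.2, 81.0, 76.9, 72.7, 68.8, 64.5, 60.6, 56.3, 52.4]  # deg C

for i in range(len(temps) - 1):
    dT = temps[i] - temps[i + 1]
    K = Q * dx / (A * dT)
    print(f"segment {i + 1}: K = {K:.1f} W/(m*K)")
```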
The document contains examples of problems related to applied thermodynamics and heat engines. It includes 6 examples that cover topics like determining interface temperatures, heat transfer in heat exchangers, radiation from blackbodies, compression of gases, heating of water, and heat transfer over flat plates. The examples provide calculations and step-by-step workings to arrive at the solutions.
Hmt lab manual (heat and mass transfer lab manual) - Awais Ali
This document describes procedures for 7 experiments on heat transfer:
1. Investigates Fourier's Law of heat conduction along a brass bar by measuring temperatures at points along the bar for different heat inputs.
2. Studies heat conduction along a composite bar and calculates the overall heat transfer coefficient.
3. Examines the effect of cross-sectional area changes on temperature profiles in a conductor.
4. Determines temperature profiles and heat transfer rates from radial conduction through a cylinder wall.
5. Measures thermal conductivity of non-metallic materials and compares to theory.
6. Determines thermal conductivity of liquids and gases.
7. Investigates the relationship between power input and surface temperature for free convection.
This document compares two interpolation techniques, inverse distance weighting (IDW) and kriging, for estimating temperature values across California. The author conducts case studies at large (statewide) and small (San Francisco Bay Area) scales, comparing the accuracy and processing time of each model. At the large scale, kriging performed better, with a mean absolute error of 0.29°F compared to 0.9°F for IDW. At the small scale, both techniques performed similarly, with kriging's error lower by a slight 0.01°F. The author concludes kriging is generally more accurate but also more complex to implement than the simpler IDW approach.
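For contrast with kriging's complexity, a minimal NumPy implementation of the simpler IDW estimator fits in a few lines; the station coordinates and temperatures below are toy values, not the study's data.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse-distance-weighted estimate at each query point."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    d = np.maximum(d, 1e-12)               # avoid division by zero
    w = 1.0 / d**power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Toy example: three stations (lon, lat) with temperatures in deg F,
# estimated at one query point.
stations = np.array([[-122.4, 37.8], [-121.9, 37.3], [-122.3, 37.9]])
temps = np.array([58.0, 64.0, 57.0])
print(idw(stations, temps, np.array([[-122.2, 37.6]])))
```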
This document discusses how to calculate cooling requirements for a data center. It explains that the total heat output of the data center needs to be estimated by calculating the heat from IT equipment, UPS systems, lighting, people, and other sources. Common conversion factors and design guideline values are used to convert between measurement units like Watts, BTUs, and tons. A case study then demonstrates how to calculate the heat output subtotals and total for an example data center with details on its IT load, floor area, and staff. It emphasizes that the air conditioning system capacity should be at least 1.3 times the total heat load to ensure adequate cooling and redundancy.
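A short sketch of that estimation procedure with assumed guideline values; the per-area and per-person figures below are common rules of thumb, not necessarily the document's exact numbers.

```python
# Rough data-center heat-load estimate following the approach described above.
it_load_w = 80_000                   # total IT equipment power draw (assumed)
ups_w = 0.09 * it_load_w             # UPS losses, ~9% of IT load (assumed)
floor_area_m2 = 250
lighting_w = 21.5 * floor_area_m2    # ~21.5 W per m^2 of floor (assumed)
people = 6
people_w = 100 * people              # ~100 W sensible heat per person (assumed)

total_w = it_load_w + ups_w + lighting_w + people_w
btu_per_hr = total_w * 3.412         # 1 W = 3.412 BTU/hr
tons = btu_per_hr / 12_000           # 1 ton of cooling = 12,000 BTU/hr

# Size the air conditioning at >= 1.3x the load for margin and redundancy.
print(f"load: {tons:.1f} tons -> install at least {1.3 * tons:.1f} tons")
```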
This document contains multiple problems involving ideal gas processes. The first problem describes a steady flow compressor handling nitrogen with known intake conditions and discharge pressure. It asks to determine the final temperature and work for two process types. The second problem involves air in a cylinder being compressed in a polytropic process with known initial and final pressures and temperatures. It asks to determine the work and heat transfer. The third problem describes a gas turbine expanding helium polytropically and asks to determine the final pressure, power produced, heat loss, and entropy change.
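As a worked sketch of the polytropic-compression problem type described above (illustrative numbers, not the textbook's):

```python
# Air compressed polytropically (P*V^n = const); find T2, work, and heat.
R = 0.287            # kJ/(kg*K), gas constant for air
n = 1.30             # polytropic exponent (assumed)
m = 0.5              # kg of air
T1, P1, P2 = 300.0, 100.0, 800.0     # K, kPa, kPa

T2 = T1 * (P2 / P1) ** ((n - 1) / n)         # polytropic T-P relation
w = m * R * (T1 - T2) / (n - 1)              # boundary work, kJ (negative: input)
cv = 0.717                                   # kJ/(kg*K) for air
q = m * cv * (T2 - T1) + w                   # first law: Q = dU + W
print(f"T2 = {T2:.1f} K, W = {w:.1f} kJ, Q = {q:.1f} kJ")
```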
This document provides an overview of thermodynamic concepts related to steady flow processes involving steam and water in common power plant devices. It describes the basic functions and analysis of a steam boiler, steam turbine, steam condenser, and mixing chamber. The analysis involves applying the continuity equation and the steady flow energy equation to determine heat transfer rates and work output. Sample problems are provided to illustrate the use of thermodynamic property tables and calculations for each device.
This document summarizes a study that used artificial neural networks to model and identify dynamic indoor thermal comfort based on the PMV index. The study developed equations to model thermal comfort based on factors like air temperature, humidity, clothing insulation, and metabolism. An artificial neural network was then trained using these equations to approximate the nonlinear relationship between inputs like temperature and outputs like predicted mean vote. Simulation results showed the neural network model could accurately track desired thermal sensations and matched existing fuzzy logic models of human thermal comfort. The neural network approach provides a practical method for real-time identification of thermal comfort that is better than traditional manual calculations.
The document describes the steam power cycle. It begins by introducing steam as the most common working fluid in heat engine cycles due to its desirable properties. It then discusses the Carnot cycle as the most theoretically efficient cycle, but one that is not practical. The Rankine cycle is introduced as the modified, practical cycle used in steam power plants. In the Rankine cycle, steam is fully condensed in the condenser before being pumped back to the boiler, unlike the Carnot cycle. The key components of the Rankine cycle are the boiler, turbine, condenser, and pump.
Thermodynamics Assignment 02 contains calculations for various cycles of a steam power plant operating between 40 bar and 0.04 bar:
1) Carnot, simple Rankine, and modified Rankine cycles are analyzed. The modified Rankine cycle with superheat has the highest efficiency of 40.86% and lowest SSC of 2.4820 kg/kWh.
2) "Metallurgical limit" refers to the maximum safe pressures and temperatures a power plant's components can withstand without damage.
3) Implementing reheating in the Rankine cycle increases efficiency to 41.05% and lowers SSC to 2.4663 kg/kWh by utilizing the steam's initial high temperature again (a sketch of how efficiency and SSC are computed follows below).
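A minimal sketch of how cycle efficiency and specific steam consumption (SSC) fall out of the enthalpy drops; the enthalpy values below are placeholders, not the assignment's numbers.

```python
# Simple Rankine cycle bookkeeping from four state enthalpies (kJ/kg).
h1 = 3214.0      # steam entering the turbine (40 bar, superheated; assumed)
h2 = 2060.0      # steam leaving the turbine (0.04 bar; assumed)
h3 = 121.0       # saturated liquid leaving the condenser (assumed)
h4 = 125.0       # liquid leaving the feed pump (assumed)

w_net = (h1 - h2) - (h4 - h3)        # turbine work minus pump work, kJ/kg
q_in = h1 - h4                       # heat added in the boiler, kJ/kg
efficiency = w_net / q_in
ssc = 3600.0 / w_net                 # kg of steam per kWh of net work
print(f"eta = {efficiency:.2%}, SSC = {ssc:.4f} kg/kWh")
```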
The document contains 7 examples of thermodynamics calculations involving concepts like steam tables, work, heat, ideal gases, refrigeration cycles, and processes involving gases. The examples calculate things like inlet and outlet steam pressures, net work done by systems, heat transferred in cycles, minimum heat rejection rate, refrigeration power requirement, work done by compressing an ideal gas, and net work in a sequence of gas processes.
Kaushal Kumar Singh is a senior executive with over 15 years of experience in civil engineering projects. He has expertise in restoration and rehabilitation of old buildings and structures. Currently he is working as manager of infrastructure and civil maintenance at JSW Steel Limited in Mumbai. Previously he held roles like vice president of projects at Varshitha Concrete Technologies Pvt Ltd and site engineer at Repcon. He has led large project teams and handled various types of civil works projects.
This document contains information about a NTSE test for students, including:
- Details about the test such as the date, time limit, and maximum marks
- Sections on Chemistry, Physics, and Mathematics, each with multiple choice questions and answer keys
- Contact information for Zignasa, the test preparation company, including their head office and branch office addresses and phone numbers
An analysis of a sea breeze boundary in Florida - James Brownlee
This study analyzed a sea breeze boundary that occurred on June 12th in Florida using radar observations, satellite imagery, and ground measurements. Radar detected a thin line 19 kilometers ahead of the observed sea breeze passage. It took the radar-observed boundary two hours to reach the ground observation point. The radar thin line was likely caused by insects rather than marking the true sea breeze boundary location near the surface. The results suggest radar can observe sea breeze boundaries ahead of their actual surface location.
Catherine M. Duncan is seeking an Airframe and Powerplant Technician position where she can utilize her education and over 20 years of experience maintaining aircraft and engines. She has a FAA Airframe and Powerplant license and an Associate's Degree in Criminal Justice. Her work history includes experience as an unlicensed mechanic for Airborne Maintenance, an engine support technician for Belcan Engineering, a flightline mechanic for Cessna Aircraft, and a jet engine mechanic in the U.S. Air Force working on KC-135 aircraft. She has skills in blueprint reading, troubleshooting, customer service, and team-based maintenance.
This document provides a summary of Marcela F. Thedim's professional experience and qualifications. She has over 20 years of experience in project management, operations management, and reliability engineering roles for companies in the oil and gas industry. She holds degrees in mechanical engineering and English education, as well as certifications in project management, auditing, Lean Six Sigma, and safety. Her experience includes roles managing offshore drilling projects and rig operations with a proven track record of improving safety, efficiency, and cost savings.
This is the final paper for our project in the Numerical Methods for Partial Differential Equations class. For this project, we applied different numerical schemes/algorithms to simulate a wave using the two-dimensional advection equation. The goal was to determine which numerical schemes were computationally stable or unstable, and how this affected the simulation of a two-dimensional wave.
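As an illustration of the kind of scheme such a project compares, here is a minimal first-order upwind update for the 2-D advection equation; the grid size, velocities, and initial condition are illustrative, and the time step is chosen to respect the CFL stability limit.

```python
import numpy as np

# First-order upwind scheme for u_t + a*u_x + b*u_y = 0 with a, b > 0.
nx = ny = 101
dx = dy = 2.0 / (nx - 1)
a = b = 1.0
dt = 0.4 * dx / (a + b)              # within the CFL limit dt <= dx/(a+b)

x = np.linspace(0, 2, nx)
y = np.linspace(0, 2, ny)
X, Y = np.meshgrid(x, y, indexing="ij")
u = np.exp(-40 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))   # Gaussian pulse

for _ in range(200):
    un = u.copy()
    # Backward differences in x and y because a, b > 0 (upwind direction).
    u[1:, 1:] = (un[1:, 1:]
                 - a * dt / dx * (un[1:, 1:] - un[:-1, 1:])
                 - b * dt / dy * (un[1:, 1:] - un[1:, :-1]))
```

With this explicit scheme, pushing dt past the CFL limit is exactly what makes the simulated wave blow up, which is the stability behavior the project set out to compare.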
Title of the Report, A. Partner, B. Partner, and C. Partner.docx - juliennehar
Title of the Report
A. Partner, B. Partner, and C. Partner
Abstract
The report abstract is a short summary of the report. It is usually one paragraph (100-200 words) and should include about one or two sentences on each of the following main points:
1. Purpose of the experiment
2. Key results
3. Major points of discussion
4. Main conclusions
Tip: It may be helpful if you complete the other sections of the report before writing the abstract. You can basically draw these four main points from them.
Example: In this experiment a very important physical effect was studied by measuring the dependence of a quantity V on the quantity X for two different sample temperatures. The experimental measurements confirmed the quadratic dependence V = kX² predicted by Someone’s first law. The value of the mystery parameter k = 15.4 ± 0.5 s was extracted from the fit. This value is not consistent with the theoretically predicted k_theory = 17.34 s. This discrepancy is attributed to the low efficiency of the V-detector.
1. Introduction

This section is also often referred to as the purpose or plan. It includes two main categories:

Purpose: It is usually expressed in one or two sentences that include the main method used for accomplishing the purpose of the experiment. Ex: The purpose of the experiment was to determine the mass of an ion using the mass spectrometer.

Background and theory: related to the experiment. This includes explanations of the theories, methods, or equations used; for the example above, you might want to explain the theory behind the mass spectrometer and give a short description of the process and setup you used in the experiment. It is important to remember that the report needs to be as straightforward as possible. You should include only as much information as the reader needs to understand the purpose and methods. You should also provide additional information such as a hypothesis (what is expected to happen in the experiment based on the theory) or safety information. The introduction mainly focuses on helping the reader understand the purpose, the methods, and the reasons for those particular methods.
Example:

Calculation of the pressure coefficient Cp

From the lecture notes, Cp can be obtained by Eq. (1):

−Cp = (P − P∞) / (½ ρ U∞²)    (1)

where P and P∞ are, respectively, the local pressure and the atmospheric pressure far away, and U∞ is the wind velocity of the wind tunnel.

Calculation of the lift coefficient CL

First, the expression for the pressure force acting normal to the chord line is given in the lecture notes as Eq. (2):

Cn = ∮ Cp (−n̂ · ŷ) dl    (2)

with Cp the pressure coefficient, n̂ the unit normal vector pointing out of the surface, and ŷ the unit vector in the direction normal to the chord line; dl is the length of an infinitesimal line element. Similarly, the axial component can be expressed as Eq. (3):

Ca ...
Research proposal: Thermoelectric cooling in electric vehicles - KristopherKerames
This experiment aims to characterize a thermoelectric cooler (TEC) for cooling electric vehicle batteries by measuring its Seebeck coefficient and coefficient of performance (COP). A small-scale system using a hot plate, TEC module, and fan will simulate an EV battery cooling system. Temperature and voltage measurements taken with and without the hot plate will be used to calculate the Seebeck coefficient and COP of the TEC and determine the uncertainty in these values. The results will help engineers evaluate TECs for optimal battery thermal management.
This document discusses thermodynamic properties and calculations. It defines thermodynamic properties as quantities that characterize a system's overall state, like temperature, pressure, and volume. It also outlines the first and second laws of thermodynamics. The first law states that energy is conserved, while the second law concerns the direction of spontaneous processes and limits energy conversions. Examples are provided to demonstrate calculating work, heat, internal energy, and enthalpy changes for ideal gases undergoing various thermodynamic processes.
The document describes integrating a physical Nest smart thermostat into an agent-based model for simulating residential HVAC loads. Researchers developed a statistical model to simulate individual house HVAC usage and then aggregated them. They replaced the simulated thermostat for one house with a physical Nest thermostat. The Nest thermostat was installed in an environmental chamber controlled by a PID controller to mimic house temperature conditions. Data was sent between the simulation and Nest using its API. This allows the simulation to use the Nest's actual thermostat logic and responses, increasing simulation realism for demand response control algorithm development.
Welcome to International Journal of Engineering Research and Development (IJERD) - IJERD Editor
ENGR202_69_group7_lab4partA_report.docx - YIFANG WANG
This document summarizes an experiment conducted to measure temperature using a thermocouple. The experiment involved using a thermocouple and data logging software to record the cooling curve of a heated resistor over multiple trials. Time constants were calculated from the cooling curves and used to generate modified cooling curves that closely matched the experimental data. The goals of the experiment were to understand how to properly use a thermocouple to measure temperature changes and analyze the thermal properties of materials.
The document summarizes testing done on plate coil panels designed to regulate the temperature of the Daniel K. Inouye Solar Telescope enclosure. Testing found that the plate coils maintained a surface temperature about 2 degrees Celsius above ambient temperature rather than the desired 2 degrees below ambient. This suggests the coolant pumped through the coils may need to be set to a lower temperature. Additional testing is recommended at different angles and pressures to allow a more thorough analysis. Modifications like using a larger chiller or cooler coolant temperature are also proposed to improve performance.
This lab experiment investigated the relationship between temperature and pressure of a fixed quantity of air. The independent variable was temperature, the dependent variable was pressure, and the quantity of air was kept constant. Temperature and pressure data were collected as the flask of air was heated. The results showed a linear relationship, as expected, but the x-intercept representing absolute zero was much higher than expected. Sources of error included old equipment that may have been inaccurate and an experimental setup where the flask was not fully submerged for even heating.
The document summarizes a student laboratory experiment on temperature measurement using thermocouples. The students measured temperature by taking voltage readings from a T-type copper-constantan thermocouple over increments of 0.3 volts from 0 to 3.3 volts. They calculated the thermocouple constant K and used linear regression to determine the coefficients a and b of the best-fit line. Comparisons of the experimental data to the regression line and calibration curve showed errors, which the students attributed to inaccuracies in the thermocouple junctions and potential loss of calibration. The document concludes that care must be taken to minimize errors and that liquid-in-glass thermometers provide more accurate temperature measurements.
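A minimal sketch of the least-squares calibration step described above; the voltage/temperature pairs are placeholders, not the lab's data.

```python
import numpy as np

# Fit the calibration line T = a + b*V from paired readings taken in
# 0.3 V increments from 0 to 3.3 V (temperatures below are assumed).
v = np.array([0.0, 0.3, 0.6, 0.9, 1.2, 1.5, 1.8, 2.1, 2.4, 2.7, 3.0, 3.3])
t = np.array([20.1, 27.4, 34.6, 42.0, 49.1, 56.5, 63.8, 71.2,
              78.3, 85.7, 92.9, 100.2])

b, a = np.polyfit(v, t, 1)           # least-squares slope and intercept
residuals = t - (a + b * v)
print(f"T = {a:.2f} + {b:.2f} * V, max error = {abs(residuals).max():.2f} C")
```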
Experiment 4
Newtonian Cooling
EGME 306A
Group 2
ABSTRACT
The objective of this experiment is to understand the relationship between the change in temperature of an object and its surroundings. Newton's law of cooling states that the rate at which an object's temperature changes is proportional to the temperature difference between the object and its surroundings. The purpose of the experiment is to measure temperature using a transducer of our own choice, understand the heat transfer, and determine the heat transfer coefficient.
TABLE OF CONTENTS
Abstract
Table of Contents
Introduction and Theory
Procedures
Summary of Important Results
Sample Calculations and Error Analysis
Discussion and Conclusion
References
Appendix
INTRODUCTION AND THEORY
In this experiment, a mass of lead in a crucible will be heated to its melting point and a transducer will be inserted in the lead. The heating is then ceased and the data of temperature versus time are accumulated by some data-acquisition system of your choice.
It is known from thermodynamics that when two bodies at different temperatures are in contact, heat will flow from the hotter body to the cooler one in a process known as Heat Transfer. The rate of this heat flow depends upon the temperature difference and thermal resistances in much the same way that electric current depends upon the potential difference (voltage) and electrical resistances. In solids, heat transfer occurs by molecular motion in a process called conduction, whereas in fluids, such as air and water, heat is transferred by fluid motion in a process called convection. In addition, heat is also transferred by electromagnetic radiation in transparent substances, or in a vacuum.
Consider a solid object in contact with air. If the surface temperature of the body, Ts, is higher than the air temperature, T∞, then there will be heat transferred from the object to the air. Newton proposed that the rate of this heat transfer, q, is proportional to the surface area of the object, A, and the temperature difference, (Ts − T∞):

q = h · A · (Ts − T∞)    (IV-1)

where the constant of proportionality, h, is called the heat transfer coefficient. Equation (IV-1) is known as Newton’s rate equation.
In Newton’s time, the actual mechanism whereby this heat transfer occurred was not well understood. Today, however, it is known that heat transfer from a surface involves convection and radiation, and that these two mechanisms occur in parallel. As a result, the total rate of heat transfer from the surface, q, is the sum of the parts due to convection, qc, and radiation, qr. Thus, from Eq. (IV-1),

q = qc + qr = (hc + hr) · A · (Ts − T∞)    (IV-2)

where,
hc = convective heat transfer coefficient ...
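A small numeric illustration of Eqs. (IV-1) and (IV-2); the excerpt gives no values, so every coefficient below is an assumption.

```python
# Total heat loss from a hot surface by convection and radiation, Eq. (IV-2).
h_c = 10.0        # W/(m^2*K), convective coefficient (assumed)
h_r = 6.0         # W/(m^2*K), effective radiative coefficient (assumed)
A = 0.01          # m^2, surface area of the crucible (assumed)
T_s, T_air = 327.0, 25.0   # deg C: lead near its melting point vs. room air

q = (h_c + h_r) * A * (T_s - T_air)
print(f"q = {q:.1f} W")    # ~48.3 W at the start of cooling
```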
An Adaptive Soft Calibration Technique for Thermocouples using Optimized ANN - idescitation
Design of an adaptive soft calibration technique for temperature measurement using a thermocouple and an optimized Artificial Neural Network (ANN) is reported in this paper. The objectives of the present work are: (i) to extend the linearity range of measurement to 100% of the full-scale input range, (ii) to make the measurement technique adaptive to variations in temperature coefficients, and (iii) to achieve objectives (i) and (ii) using an optimized neural network. The optimized neural network model is designed with various algorithms and neuron transfer functions under a particular scheme. The output of a thermocouple is of the order of millivolts; it is converted to voltage using a suitable data conversion unit. A suitable optimized ANN is added in place of the conventional calibration circuit. The ANN is trained and tested with simulated data considering variations in temperature coefficients. Results show that the proposed technique has fulfilled the objectives.
This document summarizes a study on using the Gauss-Seidel numerical method to simulate heat transfer through a 3D rectangular bar. The author varied parameters like grid resolution, tolerance, and boundary conditions to determine an optimal standard approach. They found that a resolution of 21x21 nodes and a tolerance of 10^-5 provided accurate results within an acceptable runtime of around 1 minute. This standard approach produced smooth temperature distributions and could be applied to different boundary condition tests without excessive computation time.
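A minimal sketch of the Gauss-Seidel approach on one 2-D slice, using the 21x21 resolution and 1e-5 tolerance the study settled on; the boundary temperatures are assumed values.

```python
import numpy as np

# Gauss-Seidel iteration for steady conduction (Laplace's equation) with
# fixed-temperature edges on a 21x21 grid, iterated to a 1e-5 tolerance.
n, tol = 21, 1e-5
T = np.zeros((n, n))
T[0, :], T[-1, :] = 100.0, 0.0      # hot and cold faces (assumed)
T[:, 0], T[:, -1] = 0.0, 0.0        # side faces (assumed)

max_change = tol + 1
while max_change > tol:
    max_change = 0.0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new = 0.25 * (T[i + 1, j] + T[i - 1, j]
                          + T[i, j + 1] + T[i, j - 1])
            max_change = max(max_change, abs(new - T[i, j]))
            T[i, j] = new           # in-place update: Gauss-Seidel, not Jacobi
```

The in-place update is what distinguishes Gauss-Seidel from Jacobi iteration and is why it typically converges in fewer sweeps.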
1) A group of students designed and built a solar water heater for a class project with a $30 budget. They built a parabolic trough design out of coroplast and an emergency blanket.
2) Testing of the device found that it reached a temperature of 86°C, lower than the predicted 120°C due to wind and non-ideal conditions not accounted for in the model.
3) The students concluded that the experimental data was more reliable than the theoretical predictions, and that the project provided valuable experience with applying classroom concepts to real-world design constraints.
This document summarizes a student project analyzing the effectiveness of different fin designs on a CPU heatsink. It includes the design of a base heatsink model with 30 fins and details simulations varying the number of fins, fin height, and rotation speed. The results show that increasing the number of fins from 30 to 50 lowers the maximum surface temperature by 5.4°C, while further increases do not impact temperature as much.
This report analyzes the impact of relative humidity, cooling load, and wet bulb temperature on the energy efficiency of a chiller plant. It finds that wet bulb temperature is the main driver of chiller efficiency, while relative humidity most impacts cooling tower efficiency. A regression model is developed to optimize the approach temperature, which could save an estimated 4.78% of total monthly energy consumption if implemented. However, the model may not generalize to other plants due to differences in capacity and conditions.
ABSTRACTThe objective of this experiment is to understa.docxannetnash8266
ABSTRACT
The objective of this experiment is to understand the relationship between the change of temperature of an object and its surroundings. The Newtonian Cooling says that the temperature of an object is proportional to the temperature of the surrounding. The reason for the experiment is to make an experiment that measures temperature using a transducer of our own choice, understand the heat transfer and determine the coefficient of the heat transfer.
TABLE OF CONTENTS
Abstract ……………………………………………………………………………2
Table of Contents…………………………………………………………………..3
Introduction and Theory……………………………………………………….......4-9
Procedures………………………………………………………………………..10-11
Summary of Important Results…………………………………………………….12
Sample Calculations and Error Analysis…………………………………………...13
Discussion and Conclusion…………………………………………………………14
References…………………………………………………………………………..15
Appendix…………………………………………………………………………16-19
INTRODUCTION AND THEORY
In this experiment, a mass of lead in a crucible will be heated to its melting point and a transducer will be inserted in the lead. The heating is then ceased and the data of temperature versus time are accumulated by some data-acquisition system of your choice.
It is known from thermodynamics that when two bodies at different temperatures are in contact, heat will flow from the hotter body to the cooler one in a process known as Heat Transfer. The rate of this heat flow depends upon the temperature difference and thermal resistances in much the same way that electric current depends upon the potential difference (voltage) and electrical resistances. In solids, heat transfer occurs by molecular motion in a process called conduction, whereas in fluids, such as air and water, heat is transferred by fluid motion in a process called convection. In addition, heat is also transferred by electromagnetic radiation in transparent substances, or in a vacuum.
Consider a solid object in contact with air. If the surface temperature of the body, T_s, is higher than the air temperature, T_air, then there will be heat transferred from the object to the air. Newton proposed that the rate of this heat transfer, q, is proportional to the surface area of the object, A, and the temperature difference, (T_s − T_air):

q = h A (T_s − T_air)        (IV-1)

where the constant of proportionality, h, is called the heat transfer coefficient. Equation (IV-1) is known as Newton's rate equation.
In Newton's time, the actual mechanism whereby this heat transfer occurred was not well understood. Today, however, it is known that heat transfer from a surface involves convection and radiation, and that these two mechanisms occur in parallel. As a result, the total rate of heat transfer from the surface, q, is the sum of the parts due to convection, q_c, and radiation, q_r. Thus, from Eq. (IV-1),

q = (h_c + h_r) A (T_s − T_air)        (IV-2)

where
h_c = convective heat transfer coefficient
h_r = effective radiative heat transfer coefficient.
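To connect the theory to the data analysis, the sketch below fits the lumped-capacitance cooling solution, T(t) = T_air + (T0 − T_air)·exp(−hAt/(mc)), to a synthetic temperature-versus-time record in order to estimate h. The sample properties and the data are invented for illustration; they are not the experiment's values.

```python
import numpy as np

# Lumped-capacitance cooling: T(t) = T_air + (T0 - T_air)*exp(-h*A*t/(m*c)).
# All numbers below are assumed for illustration only.
m, c, A = 0.5, 128.0, 0.004          # kg, J/(kg K), m^2 (lead sample)
T_air, T0, h_true = 25.0, 320.0, 12.0

t = np.linspace(0.0, 3000.0, 60)     # s
T = T_air + (T0 - T_air) * np.exp(-h_true * A * t / (m * c))
T += np.random.default_rng(0).normal(0.0, 0.5, t.size)   # measurement noise

# Linearize: ln((T - T_air)/(T0 - T_air)) = -(h*A/(m*c))*t, then fit the slope.
y = np.log((T - T_air) / (T0 - T_air))
slope = np.polyfit(t, y, 1)[0]
h_est = -slope * m * c / A
print(f"estimated h = {h_est:.1f} W/(m^2 K)")
```

Note that an h estimated this way lumps together the convective and radiative contributions, h_c + h_r, exactly as in Eq. (IV-2).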
This lab manual document provides instructions for experiments on heat transfer in a Mechanical Engineering department. The first experiment listed is on heat transfer from a pin-fin apparatus. The objective is to calculate the heat transfer coefficient for natural and forced convection from a fin. The experiment involves measuring temperatures along a brass fin heated at one end while air passes over it naturally or in a duct. The second experiment listed is on heat transfer through a composite wall, and involves determining the total thermal resistance and conductivity of a wall made of different slab materials sandwiching a heater.
The document describes a 1-hour fire resistance test of an 8/C #12 AWG electrical cable installed in a concrete slab. Megger and resistance tests were performed on the cable before, during, and after exposure to the ASTM E119 time-temperature heating curve for 102 minutes. All cable supports remained firmly attached and the cable met insulation resistance acceptance criteria. Following furnace exposure, the cable passed a 5-minute hose stream test without issues.
This document discusses using thermally conductive plastic housings for industrial control electronics like PLCs and power supplies. Finite element modeling and testing of materials from different suppliers found that a prototype material from GEP performed better than other options at dissipating heat. Samples of housings molded from the GEP material showed lower steady-state temperature readings when tested with instrumented I/O modules, verifying its improved thermal conductivity over conventional plastics. Further verification is still needed using a housing designed specifically for the GEP material.
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdfSelcen Ozturkcan
Ozturkcan, S., Berndt, A., & Angelakis, A. (2024). Mending clothing to support sustainable fashion. Presented at the 31st Annual Conference by the Consortium for International Marketing Research (CIMaR), 10-13 Jun 2024, University of Gävle, Sweden.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita...Advanced-Concepts-Team
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
Discovery of An Apparent Red, High-Velocity Type Ia Supernova at 𝐳 = 2.9 wi...Sérgio Sacani
We present the JWST discovery of SN 2023adsy, a transient object located in a host galaxy JADES-GS+53.13485−27.82088 with a host spectroscopic redshift of 2.903 ± 0.007. The transient was identified in deep James Webb Space Telescope (JWST)/NIRCam imaging from the JWST Advanced Deep Extragalactic Survey (JADES) program. Photometric and spectroscopic followup with NIRCam and NIRSpec, respectively, confirm the redshift and yield UV-NIR light-curve, NIR color, and spectroscopic information all consistent with a Type Ia classification. Despite its classification as a likely SN Ia, SN 2023adsy is both fairly red (E(B−V) ∼ 0.9) despite a host galaxy with low extinction and has a high Ca II velocity (19,000 ± 2,000 km/s) compared to the general population of SNe Ia. While these characteristics are consistent with some Ca-rich SNe Ia, particularly SN 2016hnk, SN 2023adsy is intrinsically brighter than the low-z Ca-rich population. Although such an object is too red for any low-z cosmological sample, we apply a fiducial standardization approach to SN 2023adsy and find that the SN 2023adsy luminosity distance measurement is in excellent agreement (≲ 1σ) with ΛCDM. Therefore unlike low-z Ca-rich SNe Ia, SN 2023adsy is standardizable and gives no indication that SN Ia standardized luminosities change significantly with redshift. A larger sample of distant SNe Ia is required to determine if SN Ia population characteristics at high-z truly diverge from their low-z counterparts, and to confirm that standardized luminosities nevertheless remain constant with redshift.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
JAMES WEBB STUDY THE MASSIVE BLACK HOLE SEEDSSérgio Sacani
The pathway(s) to seeding the massive black holes (MBHs) that exist at the heart of galaxies in the present and distant Universe remains an unsolved problem. Here we categorise, describe and quantitatively discuss the formation pathways of both light and heavy seeds. We emphasise that the most recent computational models suggest that, rather than a bimodal-like mass spectrum between light and heavy seeds with light at one end and heavy at the other, a continuum exists: light seeds are more ubiquitous, and the heavier seeds become less and less abundant due to the rarer environmental conditions required for their formation. We therefore examine the different mechanisms that give rise to different seed mass spectra. We show how and why the mechanisms that produce the heaviest seeds are also among the rarest events in the Universe and are hence extremely unlikely to be the seeds for the vast majority of the MBH population. We quantify, within the limits of the current large uncertainties in the seeding processes, the expected number densities of the seed mass spectrum. We argue that light seeds must be at least 10^3 to 10^5 times more numerous than heavy seeds to explain the MBH population as a whole. Based on our current understanding of the seed population, this makes heavy seeds (Mseed > 10^3 M⊙) a significantly more likely pathway given that heavy seeds have an abundance pattern that is close to, and likely in excess of, 10^-4 compared to light seeds. Finally, we examine the current state-of-the-art in numerical calculations and recent observations and plot a path forward for near-future advances in both domains.
Evidence of Jet Activity from the Secondary Black Hole in the OJ 287 Binary S...Sérgio Sacani
We report the study of a huge optical intraday flare on 2021 November 12 at 2 a.m. UT in the blazar OJ 287. In the binary black hole model, it is associated with an impact of the secondary black hole on the accretion disk of the primary. Our multifrequency observing campaign was set up to search for such a signature of the impact based on a prediction made 8 yr earlier. The first I-band results of the flare have already been reported by Kishore et al. (2024). Here we combine these data with our monitoring in the R-band. There is a big change in the R–I spectral index by 1.0 ± 0.1 between the normal background and the flare, suggesting a new component of radiation. The polarization variation during the rise of the flare suggests the same. The limits on the source size place it most reasonably in the jet of the secondary BH. We then ask why we have not seen this phenomenon before. We show that OJ 287 was never before observed with sufficient sensitivity on the night when the flare should have happened according to the binary model. We also study the probability that this flare is just an oversized example of intraday variability using the Krakow data set of intense monitoring between 2015 and 2023. We find that the occurrence of a flare of this size and rapidity is unlikely. In machine-readable Tables 1 and 2, we give the full orbit-linked historical light curve of OJ 287 as well as the dense monitoring sample of Krakow.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
Jom first guess upgrade in min temp tool (jan 2015)
Optimization of the 45th Weather Squadron's ‘First Guess'
Minimum Temperature Prediction Equation

JAMES S. BROWNLEE
Florida Institute of Technology, Melbourne, Florida

WILLIAM P. ROEDER
45th Weather Squadron, Patrick AFB, Florida
ABSTRACT
An upgrade was made to the 45th Weather Squadron's (45 WS) Minimum Temperature tool. This update was desired since the initial 45 WS minimum temperature tool contained several elements that had been tuned subjectively. More importantly, there was a change in 45 WS operational requirements for minimum temperature advisories to significantly colder temperatures. The previous warmest low temperature advisory was ≤ 60F. After the end of the Space Shuttle Program in 2011, the warmest 45 WS temperature advisory became ≤ 35F. Since the post-Space Shuttle temperature advisories represented a significantly colder regime, a re-optimized algorithm was desired. The 45 WS minimum temperature tool consists of a ‘first guess' based on the 1000-850 mb thickness and correction factors for various local meteorological effects. In this project, the ‘first guess' equation was re-optimized and represents a substantial improvement over the previous equation. This re-optimized ‘first guess' equation is the first and most important step for upgrading the entire low temperature tool.
1. Introduction
The 45th Weather Squadron (45 WS) provides weather support for the Cape Canaveral Air Force Station (CCAFS), NASA's Kennedy Space Center (KSC), and Patrick Air Force Base (PAFB) (Roeder et al. 2005). Most of the support provided by the 45 WS is for operations at KSC and CCAFS, which include space launches, preparation for space launches, personnel safety, and resource protection. Among the many support functions of the 45 WS are the low temperature advisories, which are listed in Table 1. The minimum temperature advisories are the most frequently issued warning, watch, or advisory product of the 45 WS during the winter months (Roeder et al. 2005). These minimum temperature advisories are critical because if the temperature gets too low, icing damage can occur to refrigerated lines exposed to the outdoors at various facilities (Roeder et al. 2005).
The minimum temperature tool used by the 45 WS needed to be updated because the prior tool was developed to include Space Shuttle operations. During the Space Shuttle Program, the 45 WS was responsible for temperature advisories of ≤ 60F. When the program ended in 2011, the 45 WS low temperature advisories changed: the warmest low temperature advisory became ≤ 35F. As a result of this colder temperature regime, an update to the minimum temperature algorithm was needed.
The current minimum temperature forecast tool in use by the 45 WS uses the 1000 mb to 850 mb thickness to make a ‘first guess' minimum temperature forecast. This minimum temperature is predicted through the use of a linear regression equation (Roeder et al. 2005). This method of using thickness values for predicting both minimum and maximum temperatures has been utilized at many different forecasting centers (Struthwolf 1995; Massie and Rose 1997; Rose 2000), and many of these forecasting techniques utilize linear regression equations (Massie and Rose 1997; Rose 2000). In a similar manner to Rose (2000), the forecasted temperature at the 45 WS is further modified by several correction factors to incorporate local effects. These local effects are wind speed, cloud cover, wind direction, nocturnal inversion, dew point, boundary layer humidity, and mid-level humidity. After the correction factors are applied, the final expected minimum temperature is provided as guidance to the forecaster. This minimum temperature algorithm is shown in Fig. 1, and its overall structure is sketched in code below. The ‘first guess' temperature prediction is the most critical part of the forecast; if this number is significantly in error, then the entire temperature forecast is wrong. A new re-optimized linear ‘first guess' temperature prediction equation was produced by this research project.
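The structure just described can be made concrete with a short sketch: a linear ‘first guess' from the thickness, followed by additive corrections for the local effects. The slope, intercept, and correction values below are placeholders for illustration; the operational 45 WS values are not reproduced here.

```python
# Sketch of the minimum temperature tool's structure (Fig. 1): a linear
# 'first guess' from the 1000-850 mb thickness plus additive local-effect
# corrections. All numbers are placeholders, not the 45 WS values.

def first_guess(thickness_m, slope, intercept):
    """Linear 'first guess' minimum temperature (deg F)."""
    return slope * thickness_m + intercept

def min_temp_forecast(thickness_m, slope, intercept, corrections_f):
    """Add each local-effect correction (deg F) to the 'first guess'."""
    return first_guess(thickness_m, slope, intercept) + sum(corrections_f.values())

# Hypothetical example: wind and inversion corrections cool the forecast.
corrections = {"wind": -2.0, "clouds": +1.5, "inversion": -1.0}
print(min_temp_forecast(1300.0, 0.07, -50.0, corrections))
```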
2. Data and methods
The previous ‘first guess' equation is a linear regression equation that uses the 1000-850 mb thickness to predict the ‘first guess' minimum temperature. A linear equation has the following form:

y = m·x + b        (1)
The slope ‘m' and intercept ‘b' were previously optimized by linear regression by the 14th Weather Squadron (14 WS), the Air Force climatology center, using radiosonde data and temperature data from the 45 WS tower network with temperatures ≤ 60F. The previous operational ‘first guess' linear regression equation is shown below:
MinTemp°F = (9/5) × [0.1979 × Thickness(1000-850 mb) + 15.592 − 273.15] + 32        (2)
This previous ‘first guess' equation was created by the 14 WS in 2004 at the request of the 45 WS. The 45 WS then further refined this linear equation by ‘regression through the origin', adjusting the temperatures to Kelvin. The optimization of the new ‘first guess' began with data provided by the 14 WS. These data included the following for all days where ≤ 45F was observed at any of the 45 WS weather towers, or at the surface observation sites at the KSC Shuttle Landing Facility (KTTS) or the CCAFS Skid Strip (KXMR): the 1000-850 mb thickness nearest in time to the lowest temperature, and all surface observations at KTTS from 2 hr after sunset before the lowest temperature to 1 hr after sunrise after the lowest temperature. The data for these “cold events” (≤ 45F) were for Jan 1986-Apr 2014. The new ‘first guess' was optimized using the data from 1986-2009, while the 2010-2014 data were used for independent verification. The sample size for each of these partitions is listed in Table 2. Even though the warmest threshold for the 45 WS advisories is ≤ 35F, the threshold of ≤ 45F for cold events was chosen, based on the frequency of occurrence for CCAFS/KSC, to ensure a large enough sample size for the optimization. In addition, this ensures that most of the events are cold front passages, which are the primary mechanism for the colder events at CCAFS/KSC. This also allows a margin for the forecaster's guidance as temperatures begin to approach the warmest advisory threshold.
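A sketch of this event selection and development/verification partitioning is shown below, assuming the data are in a flat table with one row per day containing the date, the observed minimum temperature, and the matched 1000-850 mb thickness. The file and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical layout: one row per day with the observed minimum temperature
# (deg F) and the 1000-850 mb thickness nearest in time to that minimum.
df = pd.read_csv("cold_events.csv", parse_dates=["date"])

cold = df[df["min_temp_f"] <= 45.0]              # cold events (<= 45F)

dev = cold[cold["date"] < "2010-01-01"]          # 1986-2009 development set
ver = cold[cold["date"] >= "2010-01-01"]         # 2010-2014 verification set
print(len(dev), "development events,", len(ver), "verification events")
```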
The new linear ‘first guess' equation was optimized using two different methods. The first method involved using the ‘Solver Tool' in EXCEL. The ‘Solver Tool' optimized the slope and intercept of the previous ‘first guess' equation by iteratively minimizing the Root Mean Square Error (RMSE) of that equation. After the optimization was complete using the ‘Solver Tool', the previous ‘first guess' equation became the following new equation:
MinTemp°F = (9/5) × [0.0371 × Thickness(1000-850 mb) + 228.15 − 273.15] + 32        (3)
The second version of the new ‘first guess' equation was created using the ‘Trend Line' linear regression function in EXCEL. This second equation is shown below:
MinTemp°F = 0.0091 × Thickness(1000-850 mb) − 92.4        (4)
Even though they appear quite different, Equation 3 is virtually identical to Equation 4. Both of these equations calculate the ‘first guess' temperature in Fahrenheit. However, the ‘Solver Tool' equation was an adaptation of the previous ‘first guess' equation that solves for the temperature in Kelvin and then converts it to Fahrenheit, while the ‘Trend Line' equation solves for the low temperature in Fahrenheit directly. Unlike Equation 3, the ‘Trend Line' equation is an analytical solution. As expected, the ‘Solver Tool' solution converged to the solution from the least squares linear regression as provided by the EXCEL ‘Trend Line' function. Indeed, the least squares ‘Trend Line' linear regression solution and the ‘Solver Tool' solution both have the same correlation coefficient (r² = 0.2459), and the average error between the two solutions is only 0.11F over the 1986-2014 data set. A t-test shows they are the same solution at the 99.99992% significance level. Presumably, if more iterations of the ‘Solver Tool' solution had been conducted, its solution would have become even closer to the least squares linear regression solution. After the optimization of the linear equation was finished, the bias and RMSE were calculated for the new ‘first guess' equation. Since the linear regression in Equation 4 is statistically optimized, it is the preferred solution, even though Equation 3 is very similar.
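Both EXCEL procedures have direct analogues in most analysis environments. The sketch below, with synthetic data standing in for the 45 WS set, minimizes RMSE iteratively (the ‘Solver Tool' approach) and also computes the closed-form least squares fit (the ‘Trend Line' approach); as noted above, the two converge to the same line.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-in for the cold-event data set.
rng = np.random.default_rng(1)
thickness = rng.uniform(1280.0, 1360.0, 500)                      # m
min_temp = 0.067 * thickness - 49.0 + rng.normal(0.0, 5.0, 500)   # deg F

def rmse(params):
    m, b = params
    return np.sqrt(np.mean((m * thickness + b - min_temp) ** 2))

# 'Solver Tool' analogue: iterative RMSE minimization from a rough start.
m_opt, b_opt = minimize(rmse, x0=[0.1, -100.0], method="Nelder-Mead").x

# 'Trend Line' analogue: closed-form least squares regression.
m_ls, b_ls = np.polyfit(thickness, min_temp, 1)

print(f"solver: T = {m_opt:.4f}*thickness + {b_opt:.2f}")
print(f"lstsq:  T = {m_ls:.4f}*thickness + {b_ls:.2f}")
```

Because the RMSE surface of a linear model is convex, the iterative search and the analytical regression agree to within the solver's stopping tolerance, which is the behavior reported above for the two EXCEL solutions.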
In the data there were six days when the predicted temperatures were exceptionally high. This was due to the large thickness values reported on each of those days. These large thickness values resulted in unrealistically high predicted temperatures, and as a result, the errors between the observed and predicted temperatures for these six events were very large. These six data points were considered erroneous outliers and removed from the data set. By removing these outlier points, a more realistic RMSE and bias could be achieved.
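One simple way to reproduce this kind of screening is to flag events whose predicted temperature is implausibly warm relative to the observation. The 20F cutoff below is an assumed value; six outliers were removed here, but no numeric criterion is stated in the text.

```python
import numpy as np

# Synthetic stand-in data and fit (as in the previous sketch).
rng = np.random.default_rng(1)
thickness = rng.uniform(1280.0, 1360.0, 500)
min_temp = 0.067 * thickness - 49.0 + rng.normal(0.0, 5.0, 500)
m, b = np.polyfit(thickness, min_temp, 1)

# Flag events the fitted line predicts far too warm (assumed 20F cutoff).
error = m * thickness + b - min_temp
keep = error <= 20.0
thickness, min_temp = thickness[keep], min_temp[keep]
print(f"removed {np.count_nonzero(~keep)} outliers")
```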
Alternate regressions were also considered. The previous 45 WS minimum temperature tool found a slight performance improvement using a ‘regression through the origin' with the 1000-850 mb thickness and the minimum temperatures in Kelvin. ‘Regression through the origin' is justified a priori since the hypsometric equation would predict zero thickness at zero absolute temperature. With the new data set in this study, the ‘regression through the origin' was also slightly better than the normal linear regression. However, the improvement was not statistically significant and so was not selected for operational use.
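Regression through the origin is a one-parameter least squares problem. A sketch of that variant, converting the minima to Kelvin and forcing a zero intercept, again with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
thickness = rng.uniform(1280.0, 1360.0, 500)                        # m
min_temp_f = 0.067 * thickness - 49.0 + rng.normal(0.0, 5.0, 500)
min_temp_k = (min_temp_f - 32.0) * 5.0 / 9.0 + 273.15               # F -> K

# 'Regression through the origin': fit T_K = m*thickness with no intercept,
# consistent with the hypsometric equation giving zero thickness at 0 K.
m = np.linalg.lstsq(thickness[:, None], min_temp_k, rcond=None)[0][0]
print(f"T_K = {m:.4f} * thickness")
```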
In the original upgrade to the 45 WS minimum temperature tool in 2004, the ‘first guess' based on the 1000-850 mb thickness performed much better than the 1000-500 mb thickness ‘first guess', which was replaced at that time. This made good meteorological sense since the cold events are mostly due to arctic outbreaks, which are much shallower than 500 mb. In this project, the possibility that the arctic layer is so shallow that its top is closer to 925 mb than 850 mb was also considered. Others have found the 1000-925 mb thickness to be useful in predicting low temperatures (Rose 2000). However, a ‘first guess' based on the 1000-925 mb thickness did not perform quite as well as the 1000-850 mb thickness, even after three outliers were eliminated. Therefore, a 1000-925 mb ‘first guess' was not selected. The possibility that the 1000-925 mb thickness might work better than the 1000-850 mb thickness for colder events was also considered. A 1000-925 mb ‘first guess' for minimum temperatures ≤ 36F was found to perform slightly worse than the 1000-850 mb ‘first guess'. Thus a potential two-tiered ‘first guess', in which the 1000-925 mb thickness would be used at the lower temperatures and the 1000-850 mb thickness at the warmer temperatures below 45F, was not selected. Likewise, a 1000-925 mb regression for minimum temperatures ≤ 45F and > 36F performed slightly worse than the 1000-850 mb thickness. Therefore, the final result is to use the 1000-850 mb thickness ‘first guess' discussed previously.
The same temperature stratification used in the 1000-925 mb regressions was also applied to the 1000-850 mb regression. However, neither the ≤ 36F nor the ≤ 45F to > 36F regressions using the 1000-850 mb thicknesses were statistically significantly better than the non-stratified 1000-850 mb regression. The plot of the colder temperature stratification suggested a 1000-850 mb regression through the origin with temperatures in Kelvin might be advantageous. However, this regression was not statistically superior to the overall 1000-850 mb thickness regression. As a result, none of these alternate 1000-850 mb regressions were selected. Despite the several alternate regressions considered, none showed a statistically significant benefit over the 1000-850 mb thickness regression. Therefore, the final result is to use the 1000-850 mb thickness ‘first guess' discussed previously and shown in Equation 4; the comparison procedure is sketched below.
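All of these alternate regressions reduce to the same comparison: fit each candidate predictor (or stratification) on the development data, score it on held-out data, and keep a change only if the improvement is statistically significant. A compact sketch of that loop, with synthetic stand-ins for the two thickness layers:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
thick_850 = rng.uniform(1280.0, 1360.0, n)   # synthetic 1000-850 mb thickness (m)
thick_925 = rng.uniform(610.0, 650.0, n)     # synthetic 1000-925 mb thickness (m)
min_temp = 0.067 * thick_850 - 49.0 + rng.normal(0.0, 5.0, n)       # deg F

train, test = slice(0, 400), slice(400, None)   # development-style split

for name, x in [("1000-850 mb", thick_850), ("1000-925 mb", thick_925)]:
    m, b = np.polyfit(x[train], min_temp[train], 1)
    rmse = np.sqrt(np.mean((m * x[test] + b - min_temp[test]) ** 2))
    print(f"{name}: RMSE = {rmse:.2f} F")
```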
3. Analysis and discussion
a. Comparison of the Accuracy of the New Linear ‘First Guess' Equation and the Previous Equation
Table 3 compares the RMSE for the previous and new ‘first guess' equations for the development (1986-2009) and verification (2010-2014) periods, with the six outlier 1000-850 mb thicknesses excluded. Table 4 compares the bias for the previous and new ‘first guess' equations for the same time periods. The new ‘first guess' equation has an RMSE of 4.83F on independent data, compared to the RMSE of 11.74F for the original equation, a 59% improvement. The new ‘first guess' has a bias of 1.31F on independent data, compared to the bias of 8.22F for the original equation, an 84% improvement. The bias indicates that the new ‘first guess' still tends to over-forecast slightly. The RMSE is the typical expected magnitude of error for individual forecasts, regardless of polarity, i.e. ±5F. The bias is the average error over many forecasts, where the individual ± errors tend to cancel each other out. Individual errors of ~5F may not appear to be good performance, but recall that this is just for the ‘first guess'; the correction factors will further reduce the error for the entire tool.
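For reference, the two verification statistics reported in Tables 3 and 4 are computed as below; this is a plain definition sketch, not 45 WS code.

```python
import numpy as np

def bias(predicted, observed):
    """Mean error: positive values mean the equation forecasts too warm."""
    return float(np.mean(np.asarray(predicted) - np.asarray(observed)))

def rmse(predicted, observed):
    """Typical error magnitude of an individual forecast, regardless of sign."""
    return float(np.sqrt(np.mean((np.asarray(predicted) - np.asarray(observed)) ** 2)))

# Tiny illustrative example (deg F):
obs = [31.0, 28.0, 35.0, 40.0]
pred = [33.0, 30.5, 34.0, 41.0]
print(f"bias = {bias(pred, obs):+.2f} F, RMSE = {rmse(pred, obs):.2f} F")
```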
Figure 2 shows the linear correlation between the observed minimum temperatures and the observed 1000-850 mb thickness values. From Fig. 2, it is evident that the linear relationship between these two parameters is rather weak. According to the correlation coefficient in Fig. 2, the linear regression line, which is Equation 4, explains only 25% (r² = 0.2459) of the variance. However, the method is still useful as a ‘first guess', given the previously discussed improvement in predicting low temperatures. It also shows the need for the correction factors to refine the forecast, and the eventual goal of this project is to optimize all of the correction factors, which are listed in Fig. 1.
Figure 3 compares the temperature predictions made by the previous ‘first guess' equation with the recorded low temperatures that occurred during each cold event from 1986-2014. Figure 4 compares the temperature predictions made by the new linear equation with the observed low temperatures that occurred on each cold event day during the same time period. These two figures clearly show that the new equation's temperature predictions are much more accurate than the previous equation's predictions.
Figures 5 and 6 compare the low temperature prediction accuracy of the previous and new linear equations for all cold days that occurred during the independent verification period (2010 to 2014). It is interesting to see how well the new equation handles predicting temperatures during extreme cold air outbreaks, and a series of such outbreaks occurred at CCAFS/KSC during the first few months of 2010. During that year and for the rest of the selected time period, the previous linear equation had considerable difficulty in predicting the minimum temperatures for each day. On almost every cold event day, the previous equation predicted temperatures that were higher than the observed minimum temperatures. From both of these figures, it is quite clear that the new equation made more accurate temperature predictions. It should be noted that in Figs. 4 and 6, there are some events when the new equation slightly under-predicted the observed low temperatures. Overall, though, Figs. 4 and 6 show that in most cases the new equation made fairly accurate temperature predictions.
As a further test of the new equation's performance, a z-test was performed, which showed that the bias of the new ‘first guess' was not statistically significantly different from zero at the 12.75% significance level, i.e., the new technique appears to be unbiased. However, the RMSE is statistically significantly different from zero at the 1.03 × 10^-200 % significance level, thus the ‘first guess' is not a perfect predictor of the minimum temperatures. This latter result reinforces the need for the correction factors in the Minimum Temperature Tool to incorporate local effects and refine the final prediction. Overall, though, it is quite evident that the new equation does a much better job than the previous equation at predicting low temperatures during cold events at CCAFS/KSC.
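A one-sample z-test of whether the mean forecast error (the bias) differs from zero can be sketched as follows. The errors here are synthetic; the significance levels quoted in the text come from the authors' own data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
errors = rng.normal(0.5, 5.0, 119)     # synthetic forecast errors (deg F)

# One-sample z-test: is the mean error statistically different from zero?
z = errors.mean() / (errors.std(ddof=1) / np.sqrt(errors.size))
p = 2.0 * norm.sf(abs(z))              # two-sided p-value
print(f"z = {z:.2f}, p = {p:.4f}")
```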
b. Reasons for the New Equation's Increased Accuracy
The new ‘first guess' equation is much better at predicting cold events at CCAFS/KSC than the previous operational equation. One significant reason for this is that the old equation was optimized for days when the low temperature was ≤ 60F; the data used to construct the previous linear ‘first guess' equation contained temperatures as high as 60F. Climatologically, there are many more days with minimum temperatures in the 60-45F range than ≤ 45F, so the previous ‘first guess' equation may have been overly tuned to the warmer range of the previous low temperature advisories. Since the previous linear regression equation was fitted to a data set that included low temperatures that high, the equation is not as useful in predicting much colder temperatures; the old equation has a warm bias. This warm bias is responsible for most of the larger RMSE and bias values that occurred when using the old operational equation. As a result of this warm bias, a new ‘first guess' equation was needed: an equation constructed using colder temperatures. Since this new equation has been tuned with much colder temperatures, it makes temperature forecasts that better match the 45 WS's new temperature advisory regime of ≤ 35F.
c. Other Work
As mentioned earlier, the 45 WS minimum temperature tool consists of a ‘first guess' based on the 1000-850 mb thickness and seven correction factors (Roeder et al. 2005). These correction factors consider wind speed, clouds, nocturnal inversion, dew point, on-shore/off-shore flow, low altitude humidity, and mid-altitude humidity. Most of these correction factors were tuned subjectively and could be improved by objective optimization. The wind speed correction factor was briefly examined in the project reported in this paper. This correction factor appeared to be working well, and no further work was done to optimize it, in order to concentrate resources on optimizing the ‘first guess', which had much more room for improvement and more impact on the performance of the minimum temperature tool.
The 45 WS is currently working with a student at the Florida Institute of Technology to optimize the cloud correction factor. The ‘first guess' equation might show even better performance if based on the 1000-925 mb thickness, since the new colder advisories are mostly due to arctic air mass outbreaks that are relatively shallow. The remaining correction factors in the 45 WS minimum temperature tool should be objectively optimized in the future.
4. Conclusions
In this project, the optimization of the linear ‘first guess' equation was performed. From the analysis, it was shown that the new optimized linear ‘first guess' equation is superior to the old operational equation. The results showed that during all recorded major cold events that occurred in East Central Florida from 1986 to 2014, the new linear equation made more accurate low temperature predictions than the old equation. In addition, the new equation made low temperature forecasts that are in line with the new low temperature advisories. This increased accuracy is reflected in the observed reduction of both the RMSE and bias values. Much of the larger RMSE and bias that occurred with the old operational equation was due to the warm bias of that particular equation, and thus that equation is not useful with the new low temperature advisory criteria. In closing, it is recommended that the new linear ‘first guess' equation be used in place of the previous linear ‘first guess' equation. Another option would be to use the new linear equation during very strong cold air outbreaks and the previous equation during less severe cold air outbreaks. Either way, the results of this analysis show that the new linear ‘first guess' equation is a major and much needed first step in updating the 45 WS's low temperature tool.
Acknowledgments. The 14th Weather Squadron, the U.S. Air Force climate center, provided the CCAFS/KSC weather data used in this study.
REFERENCES
Massie, D. R., and M. A. Rose, 1997: Predicting Daily Maximum Temperatures Using Linear Regression and Eta Geopotential Thickness Forecasts. Wea. Forecasting, 12, 799-807.
Roeder, W. P., M. McAleenan, T. N. Taylor, and T. L. Longmire, 2005: Applied Climatology in the Upgraded Minimum Temperature Prediction Tool for the Cape Canaveral Air Force Station and Kennedy Space Center. 15th Conference on Applied Climatology, 20-23 Jun 2005, 7 pp.
Rose, M., 2000: Using 1000-925 mb Thicknesses in Forecasting Minimum Temperatures at Nashville, Tennessee. Technical Attachment SR/SSD 2000-25.
Struthwolf, M. E., 1995: Forecasting Maximum Temperatures through Use of an Adjusted 850- to 700-mb Thickness Technique. Wea. Forecasting, 10, 160-171.
TABLES AND FIGURES
Table 1. Cold temperature advisories provided by the 45 WS.
Temperature Threshold Duration Desired Lead-time
≤ 35F any occurrence 4 hr
≤ 32F ≥ 4 hr 16 hr
≤ 28F any occurrence (if wind > 10 kt) 16 hr
Table 2. Partitioning of the cold weather events (≤ 45F) at CCAFS/KSC used in optimizing the 45 WS minimum temperature tool (6 outliers removed).
Time Period Description Number of Events Percent of Events
Jan 1986-Apr 2014 All Data 595 100%
Jan 1986-Dec 2009 Development Data 476 80%
Jan 2010-Apr 2014 Independent Verification 119 20%
Table 3. RMSE for the previous and new ‘first guess' equations for all cold events (≤ 45F) at CCAFS/KSC. The new-equation values were calculated using Equation 4. The six outlier 1000-850 mb thicknesses were excluded from these calculations.
Time Period RMSE of previous ‘First Guess' Equation (F) RMSE of new ‘First Guess' Equation (F)
Jan 1986–Dec 2009 8.87 3.47
Jan 2010-Apr 2014 11.74 4.83
Table 4. Bias for the previous and new ‘first guess' equations for all cold events (≤ 45F) at CCAFS/KSC. The bias values were calculated using Equation 4. The six outlier 1000-850 mb thicknesses were excluded from these calculations.
Time Period Bias of previous ‘First Guess' Equation (F) Bias of new ‘First Guess' Equation (F)
Jan 1986-Dec 2009 6.35 0.03
Jan 2010-Apr 2014 8.22 1.31
Figure 1. Schematic of the minimum temperature algorithm used by the 45 WS to make low temperature forecasts.
Figure 2. Linear regression of the observed minimum temperatures and the observed thickness values.
Figure 3. Days on which the observed low temperature reached 45F or less from 1986 to 2014 (black line), along with the low temperature predicted for each day by the old operational ‘first guess' linear equation (red line). The six outliers were removed here.
Figure 4. Days on which the observed low temperature reached 45F or less from 1986 to 2014 (black line), along with the low temperature predicted for each day by the new ‘first guess' linear equation (red line). The six outliers were removed here.
Figure 5. Days on which the observed low temperature reached 45F or less from 2000 to 2014 (black line), along with the low temperature predicted for each day by the old operational ‘first guess' linear equation (red line).
Figure 6. Days on which the observed low temperature reached 45F or less from 2000 to 2014 (black line), along with the low temperature predicted for each day by the new ‘first guess' linear equation (red line).