A Telescope Tracking and Thermal Imaging System for High-Speed Vehicles

By Matthew Charles Bastow, B.S. in Mechanical Engineering, May 2001, Rowan University

A Thesis submitted to the Faculty of The School of Engineering and Applied Science of The George Washington University in partial satisfaction of the requirements for the degree of Master of Science

January 30, 2004

This research was conducted at NASA-Langley Research Center

Abstract

A ground-based thermography system for imaging in-flight, high-speed vehicles is presented. A quantitative, non-intrusive temperature measurement technique known as ratio-intensity-thermography is discussed. A specific ratio-intensity-thermography system is presented for this application, and a proven calibration procedure is discussed. The primary goal of this work is to deliver a new, more accurate, user-friendly, portable infrared imager at a fraction of the cost of current systems. The work presented involves final assembly of the individual components, systems integration, hardware/software integration, packaging, operation, and testing of a complete turnkey system. The packaging of the system is designed for ease of transport, set-up, and operation. The entire system is contained in a total of five rugged shipping cases, each weighing less than 150 pounds gross weight. A system and method for tracking an in-flight vehicle from a ground location is presented. This system uses high-quality conventional optics and a high-speed tracking mount to latch onto the target vehicle and center it in the field-of-view of a near-infrared camera connected to an 11-inch telescope. Images are acquired at two wavelengths in the near-infrared spectrum and used to determine the absolute surface temperature of the target vehicle.


Acknowledgements

There are several individuals who deserve recognition for their contributions to this research. First of all, I would like to acknowledge Dr. Leonard Weinstein, a senior scientist of the Advanced Sensing and Optical Measurement Branch of NASA-Langley Research Center. This work was the result of his idea, and it would not have been possible without his creativity, patience, and persistence. I would also like to thank the other researchers of the Advanced Sensing and Optical Measurement Branch for their advice and support throughout the completion of this work, especially Robert Marshall for his invaluable assistance with the software development. I must also acknowledge Jason Hundley, currently of Northrop Grumman Corporation; this research was built on the research that he previously completed. I would also like to acknowledge Dr. Michael Myers of the George Washington University. His experienced advising skills have helped to shape and polish this thesis.


Table of Contents

Abstract
Acknowledgements
Table of Contents
List of Figures
List of Symbols and Acronyms
1  Introduction
2  Radiation
   2.1  Introduction
   2.2  Radiation Basics
   2.3  Blackbody
   2.4  Ratio-Intensity-Thermography
   2.5  Irradiance vs. Emissive Power
3  System Components
   3.1  RITS Components
   3.2  Tracking Components
   3.3  Computer System
   3.4  Displays
4  RITS Calibration
   4.1  Non-Uniformity Correction
   4.2  Detector Characterization
   4.3  Laboratory Calibration
   4.4  Optical Efficiency Characterization
   4.5  Verification
5  Packaging
   5.1  Case 1
   5.2  Case 2
   5.3  Case 3
   5.4  Case 4
   5.5  Case 5
6  Operation
   6.1  Set-up
   6.2  Tracking System
      6.2.1  Manual Control
      6.2.2  Computer Control
      6.2.3  Software Set-up
      6.2.4  Tracking Procedure
   6.3  Data Acquisition System
      6.3.1  Overview
      6.3.2  Operation
7  System Testing
   7.1  Laboratory Testing
   7.2  Field-Testing
      7.2.1  Single-Camera Tracking
      7.2.2  Two-camera Tracking
8  Concluding Remarks
   8.1  Objectives of this Research
   8.2  Alternate Applications and Future Work
References


List of Figures

Figure 2-1: Electromagnetic Spectrum
Figure 2-2: Planck's Law
Figure 2-3: 1.3 Micron Filter Transmission Curve
Figure 2-4: 1.55 Micron Filter Transmission Curve
Figure 2-5: Emissive Power Distribution
Figure 2-6: Theoretical Ratio Response
Figure 2-7: Ratio Response
Figure 3-1: RITS Components
Figure 3-2: Barlow Lens
Figure 3-3: Filter Wheel with MVP 2001
Figure 3-4: Display Rack
Figure 4-1: Merlin Re-Mapping Equation
Figure 4-2: Merlin Re-Mapping Function
Figure 4-3: Individual Offset Coefficients
Figure 4-4: Temperature-Ratio Response
Figure 4-5: Ratio-Temperature Equations
Figure 4-6: Optical Efficiency Factors
Figure 5-1: Case 1
Figure 5-2: Case 2
Figure 5-3: Case 3-Lower Level
Figure 5-4: Case 3-Upper Level
Figure 6-1: HARVII
Figure 6-2: Tracking Mount Control Pad
Figure 6-3: C-Sat Main Window
Figure 6-4: C-Sat Telescope Setup Window
Figure 6-5: C-Sat Video Settings Window
Figure 6-6: C-Sat Tracking Settings Window
Figure 6-7: C-Sat Feedback Settings Window
Figure 6-8: C-Sat Controls
Figure 6-9: Filter Wheel
Figure 6-10: Image Acquisition Program
Figure 7-1: Infrared FOV nested inside tracking FOV
Figure 7-2: Infrared FOV with respect to the pixels of the tracking camera
Figure 7-3: Relative FOVs for the tracking and thermal cameras
Figure 7-4: Orion FOV with respect to the pixels of the acquisition camera
Figure 7-5: NIR FOV with respect to the pixels of the Orion camera
Figure 7-6: Boeing 747
Figure 7-7: Boeing 737
Figure 7-8: Boeing 737
Figure 7-9: Boeing 737


List of Symbols and Acronyms

α       Absorptivity
α       Directional characteristic of radiation
Γ       Attenuation coefficient
ε       Emissivity
η       Surface reflection coefficient
λ       Wavelength
µ       Micron
ν       Frequency
ρ       Reflectivity
σ       Stefan-Boltzmann constant
τ       Transmissivity
ω       Dispersion coefficient
°       Degrees
∞       Infinity
A       Amplification factor
ACL     Astronomy Command Language
Alt     Altitude
Atm     Atmospheric absorption factor
Az      Azimuth
Bit     Binary digit
BMDO    Ballistic Missile Defense Organization
c       Speed of light in a vacuum
CCD     Charge Coupled Device
ep      Energy associated with a photon
F       Fahrenheit
FB      Focal length of Barlow lens
FOV     Field-of-View
FPA     Focal Plane Array
ft      feet
Gb      Gigabyte
GHz     GigaHertz
h       Planck's constant
HARVII  High Altitude/Reentry Vehicle Infrared Imager
I       Intensity
IDE     Integrated Drive Electronics
InGaAs  Indium Gallium Arsenide
InSb    Indium Antimonide
IR      Infrared
ISAFE   Infrared Sensing Aeroheating Flight Experiment
ISTEF   Innovative Sciences and Technology Experimentation Facility
J       Joule
K       Kelvin
k       Boltzmann's constant
LCD     Liquid Crystal Display
LED     Light Emitting Diode
m       meter
Mb      Megabyte
mm      millimeter
mph     miles per hour
ms      milliseconds
NASA    National Aeronautics and Space Administration
nit     Unit of brightness
NIR     Near Infrared
NUC     Non-Uniformity Correction
QE      Quantum Efficiency
RITS    Ratio Intensity Thermography System
RLV     Reusable Launch Vehicle
RPM     Revolutions per Minute
s       Physical state
SCSI    Small Computer Systems Interface
sec     Second
T       Temperature
t       Integration time
TLE     Two-Line Orbital Element
TPS     Thermal Protection System
W       Watt

1  Introduction

The space shuttle begins reentry into the Earth's atmosphere at a speed of approximately 17,500 mph and an altitude of 400,000 ft. As a consequence of this speed, there is an enormous amount of kinetic energy that must be dissipated during the descent. Most of this energy is dissipated in the form of heat. The kinetic energy is converted to thermal energy in several ways. The atmospheric gas crossing the shock wave formed in front of the vehicle is heated by compression. Additional heat is generated by frictional forces in the viscous boundary layer adjacent to the vehicle surface. While most of the heat generated goes into the air around the shuttle and is lost in the vehicle's wake, about 1/1000 of the heat is transferred to the surface of the vehicle. This results in parts of the orbiter near the leading edges heating up to 3000° F. Consequently, a reusable launch vehicle (RLV) must be equipped with a thermal protection system (TPS) to survive the atmospheric effects of reentry.

The aerodynamics and aeroheating disciplines are the primary considerations in vehicle design and provide critical inputs for other disciplines such as control system design, materials, and structures. Aeroheating and aerodynamic data determine the material selection for the TPS. The designers must choose the material that provides the minimum weight while still ensuring survivability and reusability. With highly accurate, validated tools for determining the in-flight aerothermodynamic characteristics, designers can optimize the design of future reusable launch vehicles and enhance U.S. space launch competitiveness [1].


The United States Air Force, National Aeronautics and Space Administration (NASA), and a Congressional Commission have all indicated that next generation reusable launch systems are needed within the next few years. NASA has recently awarded a 66-month, $150 million contract to a private aerospace company to provide three flight-ready demonstrator vehicles that will fly at seven times the speed of sound. The TPS size and type will have a significant effect on the vehicle’s weight and operability. The TPS design will also limit some of the maneuvers that the vehicle will be able to perform [2].

Aerospace designers and measurement technologists are also often faced with challenges of the test environment, where they are always trying to simulate, as closely as practical, the effects of the full-scale flight regime. To meet this challenge, there is a push to make measurements non-intrusively to avoid having to correct for the measurement process itself in data reduction. There is also an ever-increasing push for higher resolution and higher accuracy in measured data. Also, because of increasingly expensive testing programs, there is a growing demand for global or wide-area measurements, which can result in reduced test times and energy or other resource savings. This requirement is often met using non-intrusive techniques such as IR thermography. Thermal images obtained through IR thermography can provide a global surface temperature map without having any effect on the object being measured. Accurate, high-speed, high-resolution digital cameras are now available which, in combination with appropriate computer algorithms, can provide very precise temperature data at standard video rates or faster [3].


The basic principle behind IR thermography is the measurement of surface emissions in the IR radiation band, which are directly related to surface temperature. Great strides are being made in the application of infrared imaging to the measurement of surface temperature, with extensions being made to the determination of transition in the boundary layer. Surface shear stress and, thereby, convective heat transfer with the freestream vary with the boundary layer state. This difference in convective heat transfer results in a temperature gradient on the surface where the boundary layer state changes, such as at transition. These characteristics make IR thermography a very powerful tool to visualize certain flow phenomena. In addition to transition, any flow phenomenon that creates measurable temperature changes can be visualized. These include shock waves and flow separation [4].

Several different methods have been developed to make IR aerodynamic measurements of vehicles in their natural flight environment. Many in-flight experiments have been conducted using cameras that were mounted directly to the vehicle being measured. This type of experiment requires significant modifications to the flight vehicle and can be very costly and time-consuming. These measurements do not give a global view of the vehicle, and the measurement equipment can have an effect on the flow past the vehicle, leading to misleading results. Other in-flight measurements have been obtained using a chase vehicle. The chase vehicle is equipped with the measurement device and flown along with the vehicle being measured. This type of measurement is not practical for vehicles flying at very high altitudes, and the extreme conditions encountered during reentry pose additional dangers to the vehicle taking the data. This method also has the increased cost of operating the chase vehicle.

An alternative to the previously mentioned methods is a ground-based measurement system. Ground-based systems can be strategically located to track and image in-flight vehicles during normal operations. These types of measurements are inherently non-intrusive, and require no modifications to the test vehicle. They also are typically safer and less expensive to conduct than other types of measurements. A current ground-based system is operated at the Ballistic Missile Defense Organization\Innovative Sciences and Technology Experimentation Facility (BMDO\ISTEF) at NASA Kennedy Space Center. It has acquired reentry aeroheating data for the Infrared Sensing Aeroheating Flight Experiment (ISAFE). Images were collected during the re-entries of STS-96 and STS-103 using one of their optical pedestal mounts. These mounts are currently the smallest and most portable mounts in use; however, they require their own 18-wheeled vehicle for transportation. The mount's control system, communications equipment, and data acquisition systems also require the space of the mobile trailer. Extensive software and communication links are required for acquiring, tracking, and collecting flight data [5]. The ISAFE set-up collects data in the midwave (3-5 µ) infrared spectrum using a 24-inch infrared telescope and a 256 x 256 indium antimonide (InSb) detector. The infrared calibrated temperatures obtained during ISAFE are dependent on knowing different properties of the orbiter itself. These include the different surface emissivities of the various thermal protection materials found on the surface of the vehicle. An in-situ calibration technique has been developed, which requires information from embedded thermocouples on the shuttle's surface. The direct infrared temperature data has been compared to in-situ temperature data from embedded thermocouples on the orbiter and shown to have an error up to 5% [6].

A current non-intrusive method for measuring global aeroheating data in wind tunnels is the two-color relative intensity, phosphor thermography technique. This technique requires coating the test model with phosphors, which fluoresce in two regions of the visible spectrum when illuminated with ultra-violet light. The accuracy of this technique is dependent on the temperature rise on the surface of the test model. The best accuracy of the phosphor system is estimated at +/- 8%, and depending on the characteristics of the test model, this method can have an uncertainty as high as +/-25% [7]. This method, however, is not practical for use on full-scale reentry vehicles because it must be conducted in a controlled wind tunnel environment. Also, a typical reentry vehicle could not be covered with the phosphor coating and properly illuminated with only ultra-violet light during normal flight operations.

Aerothermodynamicists at NASA-Langley Research Center have indicated a need for a new infrared imaging system. The researchers are interested in examining the surface temperature profile on the shuttle’s surface as the surrounding flow experiences boundary layer transition. The vehicle surface temperature during the transition period is typically in the range of 1300° - 1600° F [6]. The system must be able to accurately track and measure the global surface temperature of an in-flight reusable launch vehicle (RLV) during reentry into the Earth’s atmosphere. The system must be completely portable and


simple to use so that it can be transported to various test sites and set-up and operated by no more than two users.

The purpose behind this current research is to take a proven temperature measurement technique along with multiple pieces of hardware and turn them into a user-friendly, operational, thermal imaging and tracking system. The final system is a combination of two independent subsystems: a telescope tracking system and an IR data acquisition system. This work involves the final design, development, system integration, assembly, packaging, testing, and operation of the system. Before this work was begun, a revolutionary thermography technique had already been developed and proven. Most of the hardware had already been acquired, and an in-depth calibration procedure had been performed and verified to prove the accuracy of the thermography system. However, a significant amount of work was required to integrate every component into a complete turnkey system. This system now has the capability of automatically tracking vehicles during flight and returning absolute surface temperature maps with an accuracy of over 99%. The system is engineered to be completely self-contained and independent. It is packaged in a total of five rugged shipping cases. Each case is specifically designed to accommodate its contents so that they are protected during transit and easily packed and unpacked. The system can be transported by a variety of methods, and each case meets the requirements for most commercial delivery companies. The system requires no more than two operators and can be operated by only one experienced user.


This system is designed to include all of the desirable characteristics of a state-of-the-art temperature measurement device. All information is acquired non-intrusively, so there is no effect of the measurement equipment on the flight vehicle. All images are collected during normal flight operations of the vehicle, so no elaborate laboratory or wind tunnel set-ups are required. The system uses a state-of-the-art camera so that high-speed (30 Hz), moderate-resolution (320 x 256) digital images can be acquired. Unlike typical IR systems, it does not require highly expensive IR optics. This thermography system collects data in the near-IR (NIR) spectrum, close to the visible spectrum, enabling the use of less expensive, commercially available optics. Finally, this system gives an absolute global surface temperature map without the need to know any properties of the target vehicle. This eliminates the need for extensive onboard instrumentation, which is costly and difficult to integrate into the vehicle, and can reduce the overall performance of the vehicle due to the added weight of the instruments and the required data handling systems. This system is much smaller, less complex, and more accurate than similar systems, making it more than 10 times less costly to produce and extremely portable and easy to operate.

This thesis is intended to provide a complete overview of the entire system. The theory involved in the thermography technique is outlined in Chapter 2. All of the individual hardware components are described in Chapter 3. Their contributions to the subsystems, as well as how they were integrated into the complete system, are discussed. Chapter 4 provides an explanation of the calibration procedure, which was developed and performed by another researcher. This chapter shows how an absolute
temperature can be determined using this specific thermography system and the ratio intensity technique. Chapter 5 is a thorough discussion of the packaging system that was designed as a portion of this work. It shows how numerous features were incorporated into the packaging that make the system extremely user friendly and more portable than any other similar system. An operational procedure has been developed during the course of this work, and it is described in Chapter 6. While actual testing of this system for its intended purpose is beyond the scope of this research, some preliminary testing was performed to evaluate the operational techniques and prove the functionality of the system. The testing is discussed in Chapter 7 along with how it was used to develop the operational procedure and contributed to decisions to add features, which have greatly improved the system. Chapter 8 concludes the thesis with a discussion of the objectives achieved during the course of this research. It also provides suggestions for system improvements, alternate applications, and future research.


2  Radiation

2.1  Introduction

In 1800, the physicist William Herschel held a mercury thermometer in the spectrum produced by a glass prism to measure the heat content of different colored lights. He found that the thermometer registered an increase in temperature even when held beyond the red end of the spectrum, where there was no visible light. This was the first experiment to show that an invisible form of light could transmit heat. This invisible form of light later became known as the infrared. Herschel also showed that infrared radiation obeyed the same laws as visible light. However, no infrared detectors were available until 1830. The first detectors were based on the thermocouple and were called thermopiles. Later, in 1880, the bolometer was developed. It relied on a material whose electrical resistance varies with temperature and represented a significant advance in radiation detection sensitivity. Quantum detectors underwent development between 1870 and 1920. These detectors did not depend on the creation of an electrical signal due to the heating effect of radiation, but they allowed the direct conversion of radiation into an electrical signal. In the following years up until the present, photoconducting detectors have been developed and improved upon. These detectors now have much higher sensitivities and shortened response times. They are made out of a variety of semiconducting materials and are able to measure radiation throughout the entire infrared spectrum.


2.2  Radiation Basics

The process by which heat is transferred from a body by virtue of its temperature, without the aid of any intervening medium, is called thermal radiation. Radiation is sometimes described in terms of electromagnetic waves that travel at the speed of light. From the quantum mechanics point of view, radiant energy is transported by photons that travel at the speed of light. These photons are defined as discrete packets of energy. The energy associated with each photon, ep, is given by

e_p = h ν                                                                                        (1)

where h is Planck's constant, and ν is the frequency in s^-1. The energy spectrum can also be described in terms of the wavelength of radiation, λ, which is related to the propagation velocity and the frequency by

λ = c / ν                                                                                        (2)

where c is the speed of light in a vacuum (2.998 x 10^8 m/s).
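As a quick numerical check of Equations 1 and 2, the following is a minimal sketch that evaluates the photon frequency and energy at the two wavelengths used later in this work (1.3 and 1.55 microns); the constants are the standard values quoted in the text.

```python
# Photon frequency (Eq. 2) and energy (Eq. 1) at the two RITS wavelengths.
h = 6.625e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light in a vacuum, m/s

for wavelength_um in (1.3, 1.55):
    wavelength_m = wavelength_um * 1e-6
    nu = c / wavelength_m          # frequency, 1/s
    e_p = h * nu                   # photon energy, J
    print(f"{wavelength_um} micron: nu = {nu:.3e} 1/s, e_p = {e_p:.3e} J")
```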

Electromagnetic radiation includes many different types, which are usually classified according to wavelength. The wavelength range encompassed by thermal radiation falls approximately between 0.1 and 100 microns. Figure 2-1 shows a portion of the electromagnetic spectrum and the terms used to describe the wavelength divisions.


Figure 2-1: Electromagnetic Spectrum

All objects that have temperature emit thermal radiation, and the wavelength of radiation depends on how the radiation is produced. Thermal radiation always encompasses a range of wavelengths. The amount of radiation leaving a body per unit wavelength is called ‘monochromatic exitance.’ Monochromatic radiation is emitted at a single wavelength, and the word ‘spectral’ is used to denote the dependence on wavelength. The spectral distribution depends on the temperature and surface characteristics of the emitting body, as well as on the media surrounding the radiator. The spectral exitance from a given body can have three sources: energy emitted by the body, energy transmitted through the body, and energy reflected off the body. Emitted energy can consist of energy created within the body and energy absorbed by the body from outside sources and then emitted. Transmitted energy is energy that passes through the body without first being absorbed. Reflected energy is energy from outside sources that is bounced off of the body without first being absorbed. The terms emissivity (ε), transmissivity (τ), and reflectivity (ρ) are used to describe the total exitance from a body. These quantities can be functions of wavelength (λ), temperature (T), direction (α), and the physical state (s) of the given object. The emissivity, transmissivity, and reflectivity are related by the following equation:


ε(λ, T, α, s) + τ(λ, T, α, s) + ρ(λ, T, α, s) = 1                                                (3)

The system to be discussed in this thesis is used to image space shuttle thermal protection system (TPS) tiles. These tiles are opaque to radiation at the wavelengths of interest (1.3µ and 1.55µ); therefore, the transmissivity term goes to zero. Also, the TPS tiles are diffuse, or their properties are not dependent on direction. Lastly, the emissivity and reflectivity are independent of wavelength for the temperature range of interest (800K-1400K). This allows the reduction of Equation 3 into

ε(T) + ρ(T) = 1                                                                                  (4)

Reflectivity becomes important for objects that have low emissivity or when there are significant outside sources of heat. For the current application, possible sources of reflected energy are the Sun, Moon, Earth, and clouds. According to previous studies, the contributions from these objects are low under normal operating conditions, and any reflected radiation coming from the vehicle will not affect the response of the IR detector [9]. As a result, the emissivity term is the only term that must be included in the theoretical analysis.


2.3  Blackbody

A blackbody is a body that absorbs all incident radiation and emits the maximum possible amount of radiation over the entire electromagnetic spectrum. The blackbody is a theoretical concept that sets an upper limit to the amount of radiation that can be emitted or absorbed by a given object. Absorptivity (α) is the fraction of incident energy on a surface that is absorbed by the surface. By nature, the emissivity and absorptivity of an object have the same value for a given wavelength. For a blackbody, the emissivity and absorptivity are equal to unity.

In 1900, Max Planck derived a relationship showing how the emissive power of a blackbody is distributed among the different wavelengths. The equation became known as Planck's Law and is given as

dE(λ, T)/dλ = C1 / [λ^5 (e^(C2/(λ T)) - 1)] ;  C1 = 2 π h c^2 ,  C2 = h c / k                    (5)

h = 6.625 x 10^-34 J·s,  c = 2.998 x 10^8 m/s,  k = 1.380 x 10^-23 J/K

Here, the term on the left is the spectral emissive power, h is Planck's constant, c is the speed of light in a vacuum, and k is Boltzmann's constant. This function can be plotted as a family of curves corresponding to different temperatures. Figure 2-2 shows Planck's Law as a function of wavelength for the temperature range of interest.

Figure 2-2: Planck's Law (spectral emissive power in W/m^2/micron versus wavelength in microns for 800 K, 900 K, 1000 K, 1200 K, and 1400 K)
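Curves like those in Figure 2-2 can be reproduced by evaluating Equation 5 directly. The sketch below is a minimal implementation in SI units, with the result converted to W/m^2 per micron to match the figure's axis; the sample wavelengths and temperatures are chosen only for illustration.

```python
import math

H = 6.625e-34            # Planck's constant, J*s
C = 2.998e8              # speed of light in a vacuum, m/s
K = 1.380e-23            # Boltzmann's constant, J/K
C1 = 2.0 * math.pi * H * C**2     # W*m^2
C2 = H * C / K                    # m*K

def spectral_emissive_power(wavelength_um, temp_k):
    """Blackbody spectral emissive power (Eq. 5), in W/m^2 per micron."""
    lam = wavelength_um * 1e-6                     # microns -> meters
    per_meter = C1 / (lam**5 * (math.exp(C2 / (lam * temp_k)) - 1.0))
    return per_meter * 1e-6                        # per meter -> per micron

for temp in (800, 1000, 1200, 1400):
    print(temp, [round(spectral_emissive_power(w, temp), 1) for w in (1.3, 1.55, 3.0)])
```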

The total emissive power of a blackbody at a given temperature can be determined by an integration of Planck's Law given in the form

E(T) = ∫_0^∞ C1 / [λ^5 (e^(C2/(λ T)) - 1)] dλ                                                    (6)

This integration can be performed analytically to yield the Stefan-Boltzmann Law

E(T) = σ T^4 ;  σ = 5.669 x 10^-8 W/(m^2 K^4)                                                    (7)

In the above equation, σ is the Stefan-Boltzmann constant, and E(T) is the total emissive power over the entire wavelength spectrum of a blackbody at a given temperature. All real bodies emit energy at a rate lower than a blackbody. If a body emits at a constant fraction of the rate of a blackbody over all wavelengths, it is termed a graybody. In other words, the emissivity and reflectivity of a graybody are independent of wavelength. At a given temperature, assuming no outside reflective contributions, the total emissive power of a graybody is equal to the total emissive power of a blackbody scaled by its emissivity:

E(T) = ε(T) ∫_0^∞ C1 / [λ^5 (e^(C2/(λ T)) - 1)] dλ                                               (8)


It may also be of interest to determine the emissive power of an object over a finite range of wavelengths:

E(T) = ε(T) ∫_{λ1}^{λ2} C1 / [λ^5 (e^(C2/(λ T)) - 1)] dλ                                         (9)

This case requires the integration to be performed numerically. This has been done, and the results are available in tabular form for various wavelength/temperature combinations. The integration can also be approximated with modeling software for specific wavelength ranges and plotted as a function of temperature. Figure 2-3 and Figure 2-4 show the filter transmission curves for the specific filters used in this analysis. Figure 2-5 shows the emissive power distributions over the temperature range of interest using the Full Width Half Maximum (FWHM) of the selected filters as the wavelength range. The FWHM for the selected filters are centered at 1.3 and 1.55 microns, respectively. The bandpass filters also have a peak transmittance factor that was incorporated into the curves.


Figure 2-3: 1.3 Micron Filter Transmission Curve

Figure 2-4: 1.55 Micron Filter Transmission Curve


Figure 2-5: Emissive Power Distribution (band emissive power in W/m^2 versus temperature in K for the 1.3 micron and 1.55 micron filters)
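The band-limited integral of Equation 9 is straightforward to approximate numerically, which is how curves like those in Figure 2-5 can be generated. The sketch below uses simple trapezoidal integration; the band edges shown are placeholders, since the actual limits and peak transmittance factors come from the filter curves in Figures 2-3 and 2-4.

```python
import math

H, C, K = 6.625e-34, 2.998e8, 1.380e-23
C1, C2 = 2.0 * math.pi * H * C**2, H * C / K

def planck(lam_m, temp_k):
    """Spectral emissive power per meter of wavelength (Eq. 5)."""
    return C1 / (lam_m**5 * (math.exp(C2 / (lam_m * temp_k)) - 1.0))

def band_power(lam1_um, lam2_um, temp_k, emissivity=1.0, steps=2000):
    """Trapezoidal approximation of Eq. 9 over [lam1, lam2] microns, W/m^2."""
    lam1, lam2 = lam1_um * 1e-6, lam2_um * 1e-6
    dlam = (lam2 - lam1) / steps
    total = 0.5 * (planck(lam1, temp_k) + planck(lam2, temp_k))
    total += sum(planck(lam1 + i * dlam, temp_k) for i in range(1, steps))
    return emissivity * total * dlam

for temp in (800, 1000, 1200):
    print(temp, round(band_power(1.28, 1.32, temp), 2), round(band_power(1.53, 1.57, temp), 2))
```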

2.4  Ratio-Intensity-Thermography

Ratio-Intensity-Thermography is a technique in which the ratio between the intensity distributions of two distinct wavelengths is used to determine temperature. When two intensities are divided, the intensity ratio is related to the temperature of the radiating body by

Ratio(T) = I(λa, T) / I(λb, T)
         = [ ∫_{λ1a}^{λ2a} ε(T) · 2 C1 / (λa^5 (e^(C2/(λa T)) - 1)) dλa ] / [ ∫_{λ1b}^{λ2b} ε(T) · 2 C1 / (λb^5 (e^(C2/(λb T)) - 1)) dλb ]        (10)

For this analysis, the emissivity is assumed to be a function of temperature only and so it can be moved outside the integral. It can be assumed that the emissivity values are the same at each wavelength over the temperature range in question; therefore, it can be cancelled out of the equation altogether [9]. This results in a unique ratio-to-temperature relationship for intensities measured over known wavelength ranges, regardless of the surface properties. Notice that the wavelengths in question reside on the steep side of the blackbody curve (Figure 2-2). This allows for a measurable difference in intensities for varying temperatures as shown in Figure 2-5. The intensity distribution curves can now be divided, plotted as a wavelength ratio (1.3µ/1.55µ), and used to determine the theoretical ratio response shown in the following figure:

Figure 2-6: Theoretical Ratio Response (1.3µ/1.55µ intensity ratio versus temperature from 0 to 7000 K, with the 800-1400 K temperature range of interest marked)

Figure 2-6 shows that there is a relatively sharp rise in the ratio response over the temperature range of interest. This is because the chosen wavelengths reside on the steep side of the blackbody emittance curve. At very high temperatures, the curve tends to flatten out, resulting in much less sensitivity. The temperature range of interest also lies on an approximately linear region of the ratio response curve. Figure 2-7 shows a close-up view of the ratio response over 700K-1500K:

Figure 2-7: Ratio Response (1.3µ/1.55µ intensity ratio versus temperature over 700-1500 K)

This curve gives a theoretical intensity-ratio to temperature relationship. However, there are other factors that must also be considered before an accurate temperature reading can be determined.
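The theoretical ratio response of Figures 2-6 and 2-7 can be tabulated and inverted numerically. The sketch below builds a ratio-versus-temperature lookup table from the band integrals and recovers temperature by linear interpolation; the band edges are again illustrative placeholders rather than the measured filter FWHM values, and no optical or atmospheric corrections are applied here.

```python
import math

H, C, K = 6.625e-34, 2.998e8, 1.380e-23
C1, C2 = 2.0 * math.pi * H * C**2, H * C / K

def band_integral(lam1_um, lam2_um, temp_k, steps=1000):
    """Midpoint integration of Planck's law over a wavelength band."""
    lam1, lam2 = lam1_um * 1e-6, lam2_um * 1e-6
    dlam = (lam2 - lam1) / steps
    return sum(C1 / ((lam1 + (i + 0.5) * dlam)**5 *
               (math.exp(C2 / ((lam1 + (i + 0.5) * dlam) * temp_k)) - 1.0))
               for i in range(steps)) * dlam

temps = list(range(700, 1501, 10))
ratios = [band_integral(1.28, 1.32, t) / band_integral(1.53, 1.57, t) for t in temps]

def ratio_to_temperature(ratio):
    """Invert the monotonic ratio curve by linear interpolation."""
    for (t0, r0), (t1, r1) in zip(zip(temps, ratios), zip(temps[1:], ratios[1:])):
        if r0 <= ratio <= r1:
            return t0 + (t1 - t0) * (ratio - r0) / (r1 - r0)
    raise ValueError("ratio outside tabulated range")

print(round(ratio_to_temperature(ratios[40]), 1))   # recovers 1100.0 K
```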

2.5  Irradiance vs. Emissive Power

This thermography system does not measure the total emissive power of the test object, but it measures the irradiance on the detector. The detector only sees a small
amount of the total emissive power. The irradiance is the fraction of the total emissive power that reaches the detector face. For example, if the object in question were radiating energy into a hemisphere out to the distance of the detector, the total emissive power would be all of the energy that passes through the surface of the hemisphere, while the radiation incident on the detector would only be the amount that reaches an area of the hemisphere that is the size of the collection optics. The total emittance is also affected by other factors before it reaches the detector. It is affected by atmospheric absorption {Atm(λ)}, the surface reflections of the optics {η(λ)}, the attenuation of the bandpass and neutral density filters {Γ(λ)}, the dispersion effect of the telescope {ω(λ)}, the f/number of the optical system {f/number}, and the integration time {t} and quantum efficiency {QE} of the camera. Many of these factors are equal over the wavelengths of interest so they factor out of the equation. Properties of the camera can also be factored out because they will always be present, and they are independent of wavelength. The rest of the terms can be brought outside of the integral and combined into a scaling factor for the final ratio equation. The following equations show the ratio equations with all of the terms and then with the remaining terms outside the integral.

Ratio(T) = I(λa, T) / I(λb, T)
         = [ ∫_{λ1a}^{λ2a} Atm(λ) · Γ(λ) · ω(λ) · η(λ) · (f/number) · t · QE · ε · 2 C1 / (λ^5 (e^(C2/(λ T)) - 1)) dλ ] / [ ∫_{λ1b}^{λ2b} Atm(λ) · Γ(λ) · ω(λ) · η(λ) · (f/number) · t · QE · ε · 2 C1 / (λ^5 (e^(C2/(λ T)) - 1)) dλ ]        (11)

Ratio(T) = I(λa, T) / I(λb, T)
         = [ Atm_a · Γ_a · ω_a · η_a · ∫_{λ1a}^{λ2a} 2 C1 / (λ^5 (e^(C2/(λ T)) - 1)) dλ ] / [ Atm_b · Γ_b · ω_b · η_b · ∫_{λ1b}^{λ2b} 2 C1 / (λ^5 (e^(C2/(λ T)) - 1)) dλ ]        (12)

Equation 12 shows that the ratio response is independent of surface emissivity, integration time, quantum efficiency, and f/number.
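A simple numerical check of Equations 11 and 12: scaling both band intensities by the wavelength-independent quantities (emissivity, integration time, quantum efficiency, f/number) leaves the ratio unchanged, while the per-band factors survive only as a constant calibration scale. All factor values in the sketch below are arbitrary illustrative numbers, not measured system properties.

```python
import math

H, C, K = 6.625e-34, 2.998e8, 1.380e-23
C1, C2 = 2.0 * math.pi * H * C**2, H * C / K

def band_integral(lam1_um, lam2_um, temp_k, steps=1000):
    lam1, lam2 = lam1_um * 1e-6, lam2_um * 1e-6
    dlam = (lam2 - lam1) / steps
    return sum(2.0 * C1 / ((lam1 + (i + 0.5) * dlam)**5 *
               (math.exp(C2 / ((lam1 + (i + 0.5) * dlam) * temp_k)) - 1.0))
               for i in range(steps)) * dlam

temp = 1100.0
common = 0.85 * 0.80 * 0.002 * 15.0      # emissivity * QE * t * f/number (arbitrary)
scale_a = 0.90 * 0.70                    # Atm_a * Gamma_a * omega_a * eta_a (arbitrary)
scale_b = 0.80 * 0.65                    # Atm_b * Gamma_b * omega_b * eta_b (arbitrary)
i_a = scale_a * common * band_integral(1.28, 1.32, temp)
i_b = scale_b * common * band_integral(1.53, 1.57, temp)
ideal = band_integral(1.28, 1.32, temp) / band_integral(1.53, 1.57, temp)
print(round(i_a / i_b, 6), round((scale_a / scale_b) * ideal, 6))   # identical values
```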

3  System Components

The complete thermal imaging and tracking system is called HARVII (High Altitude/Reentry Vehicle Infrared Imager). HARVII consists of many components, all of which had to be integrated into one complete turnkey system. The separate components include the tripod/wedge/tracking mount/telescope assembly, the Ratio-Intensity-Thermography System (RITS), the CCD tracking cameras, the computer systems, the video displays, and the power supply. These components combine to make up two separate subsystems: the infrared imaging/data acquisition system and the automated tracking system. Together, these two subsystems combine to form a lightweight, portable system capable of making highly accurate, non-intrusive, global surface temperature measurements of in-flight vehicles.

3.1  RITS Components

The RITS system is made up of a Near-Infrared (NIR) camera, a Barlow lens, a six-position motorized filter wheel, and an 11” Schmidt-Cassegrain telescope. Together, they are capable of imaging a vehicle at over a 500,000 ft distance, and at speeds of over Mach 25 while collecting useful temperature data in the range of 800-1400 K. Figure 3-1 shows the basic components of the system.


Figure 3-1: RITS Components

The 11-inch Schmidt-Cassegrain telescope is used for the infrared thermal imaging. The Schmidt-Cassegrain optical system uses a combination of lenses and mirrors and is referred to as a compound telescope. This unique design offers large diameter optics while maintaining very short tube lengths, making it extremely portable. The CM-1100 is an f/10 telescope with a focal length of 110 inches and a field-of-view (FOV) of 708 arc-seconds. It employs a Schmidt corrector plate to correct for spherical aberrations caused by the mirrors. The HARVII system uses the Merlin NIR camera offered by Indigo Systems Corporation. The camera consists of a 320 x 256 Indium Gallium Arsenide (InGaAs) focal plane array (FPA). The FPA is sensitive in the 0.9 to 1.68 micron wavelength range and is thermoelectrically cooled to 291 K. The camera is capable of acquiring frames in
12 bit uncorrected raw format, a 12 bit non-uniformity corrected (NUC) format, or an 8 bit dynamically enhanced format. The frames can be acquired via an external trigger or by a manual software command with Indigo System’s Talon Light Data Acquisition Software. The InGaAs FPA is a 9.6 mm by 7.7 mm format detector with square pixels measuring 30 microns in each dimension. It has an operating range of 291 +/- 50 K, and it has a high quantum efficiency of over 80%. The individual pixels vary in response to intensity, so Indigo Systems employs a NUC algorithm to obtain a uniform response across the entire detector with an error of less than 0.5% [10]. To collect useful data during hypersonic boundary layer transition, it has been determined that it is necessary to track the vehicle from its Mach 10 to Mach 4 speed range. From space shuttle entry data, this speed range occurs over an altitude range of approximately 250,000 feet to 100,000 feet. The tracking system will normally be at look-up angles of about 45 degrees to about 60 degrees corresponding to a total slant range of 280,000 feet to 140,000 feet [6,9]. During data acquisition the shuttle will have a tendency to move around within the detector area. If the shuttle moves out of the FOV, any data collected will be unusable. A focal length of 165 inches has been selected for the main optical system. This focal length results in an f/number of f/15. At f/15, the imaging system has an angular FOV of 472 arc-seconds by 379 arcseconds. This corresponds to about 1.5 arc-seconds per pixel. At a slant range of approximately 140,000 ft, the largest dimension of the shuttle (122.25 feet) will fill 75% of the horizontal dimension of the detector (320 pixels). This will allow for significant


movement of the target in the detector area from frame to frame without any lost data, while still maintaining a high resolution. The resolution at a slant range of 140,000 feet is about 6 linear inches per pixel, while at 280,000 feet it is about 12 linear inches per pixel. The Barlow lens was used to increase the image scale from the telescope prime focus. The Barlow lens acts as an f/# multiplier and increases the focal length to obtain the desired image scale. It is a diverging lens with a negative focal length. Figure 3-2 shows the principle behind the Barlow lens and the governing equation where A is the amplification factor. A Barlow lens with a focal length (FB) of –2 inches that is mounted 1.0 inches (D) from the NIR detector will give an amplification factor of 1.5. This increases the f/number from f/10 to f/15 and the focal length from 110 inches to 165 inches for the main imaging system. The lens is multi-coated, including a broadband anti-reflection coating, giving it over a 99% transmission rate in the 1.0 to 1.6 micron wavelength range.

A = D / |FB| + 1

Figure 3-2: Barlow Lens
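The image-scale numbers quoted above can be checked directly from the Barlow relation and the detector format. The following is a minimal sketch using the 9.6 mm by 7.7 mm detector dimensions and the 11-inch aperture given in the text; it reproduces the amplification factor of 1.5, the f/15 system, and the 472 x 379 arc-second FOV at about 1.5 arc-seconds per pixel.

```python
import math

F_TELESCOPE_IN = 110.0     # CM-1100 focal length, inches (f/10)
APERTURE_IN = 11.0
F_BARLOW_IN = -2.0         # Barlow focal length, inches
D_IN = 1.0                 # Barlow-to-detector spacing, inches
DETECTOR_MM = (9.6, 7.7)   # InGaAs FPA format
PIXELS = (320, 256)

A = D_IN / abs(F_BARLOW_IN) + 1.0        # amplification factor -> 1.5
f_eff_in = F_TELESCOPE_IN * A            # 165 inches
f_number = f_eff_in / APERTURE_IN        # f/15
f_eff_mm = f_eff_in * 25.4

print(f"A = {A:.1f}, effective focal length = {f_eff_in:.0f} in, f/{f_number:.0f}")
for size_mm, n_pix in zip(DETECTOR_MM, PIXELS):
    fov_arcsec = 2.0 * math.degrees(math.atan(size_mm / (2.0 * f_eff_mm))) * 3600.0
    print(f"{fov_arcsec:.0f} arc-sec across {n_pix} pixels "
          f"({fov_arcsec / n_pix:.2f} arc-sec per pixel)")
```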


A six-position filter wheel is used to acquire data at two different wavelengths for three different attenuation factors. The filter wheel rotates six different filter combinations into position in front of the detector as images are acquired so that data are collected at three different attenuation factors for each of the two chosen wavelengths. The wavelength/attenuation filter combinations are 1.3µ/0%, 1.55µ/0%, 1.3µ/50%, 1.55µ/50%, 1.3µ/90%, and 1.55µ/90%. During the data analysis procedure, the two images obtained at 0% attenuation are used to give one temperature map, the two images obtained at 50% attenuation give a second temperature map, and the two images taken at 90% attenuation give a third temperature map. Therefore, one revolution of the wheel will provide three independent sets of data that are used to determine temperature. Using these different attenuation filter pairs allows the camera to collect data over a greater temperature range without saturating the detector. This combination also provides three useful sets of data for the temperature range for boundary layer transition. HARVII’s filter wheel was purchased from Integrated Scientific Imaging Systems Inc. It uses a MicroMo motor with a one arc-minute resolution encoder. The wheel also contains an LED and photodiode receiver that can be used as a software trigger each time the wheel makes a revolution. The motor is controlled by an MVP 2001B01 intelligent drive, which connects to the computer via a RS-232 serial communication port using a crossover cable. Figure 3-3 is a photograph of the filter wheel and the MVP 2001 controller. The filter wheel consists of a stationary outer casing with an internal spinning wheel. A gear on the motor shaft spins the wheel, which has matching gear teeth around its circumference. The wheel’s casing has a 1-inch opening for the optical path, and it is made up of two separate pieces that can be disassembled so that the filters can be


installed. There are six 1-inch openings in the wheel for the filters to be mounted. Figure 3-3 shows the casing in two pieces. The wheel is mounted in the back cover, and the front cover is removed to show the wheel. When fully assembled, the casing completely covers the wheel except for a 1-inch opening in the front and back covers. As the wheel rotates, the filters are alternated in front of the opening.

Figure 3-3: Filter Wheel with MVP 2001 (annotations: drive gear, wheel, openings for optical path)
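The grouping of one wheel revolution into three independent data sets can be expressed compactly, as in the sketch below. The position ordering is an assumption made only for illustration; it is not the actual wheel layout.

```python
# (wheel position, bandpass wavelength in microns, attenuation) -- order assumed
FILTERS = [
    (0, 1.30, 0.0), (1, 1.55, 0.0),
    (2, 1.30, 0.5), (3, 1.55, 0.5),
    (4, 1.30, 0.9), (5, 1.55, 0.9),
]

def pair_by_attenuation(frames):
    """Group the six frames from one revolution into (1.3 um, 1.55 um) pairs.

    `frames` maps wheel position to its image; each returned pair yields one
    independent temperature map, so one revolution gives three maps.
    """
    groups = {}
    for position, wavelength, attenuation in FILTERS:
        groups.setdefault(attenuation, {})[wavelength] = frames[position]
    return [(g[1.30], g[1.55]) for g in groups.values()]
```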

The Barlow lens, filter wheel, and Merlin NIR camera attach to each other and the 11-inch telescope by 1-inch, light-tight optical tubes and adapters. These optical tubes, however, do not provide any structural support for the RITS, which weighs over five pounds and protrudes off of the telescope by more than a foot. An aluminum bracket design is used to provide proper support for the RITS and keep all optical components correctly aligned. The bracket is constructed out of high strength, 1-inch aluminum rails with the longest arm being double-wide. It attaches to four pre-existing holes in the Celestron mounting bracket and extends downward where it connects to the Merlin camera via a linear motion attachment. The linear motion attachment allows the Merlin
NIR camera, filter wheel, and Barlow lens to slide onto the bracket as one piece then attach to the Celestron telescope with a free-spinning c-mount adapter. The RITS bracket also contains a mounting adapter for the 75-mm fixed focal-length tracking lens and CCD camera.

3.2  Tracking Components

Tracking is accomplished using an Archimage motorized tracking mount. The Archimage is a precision controlled, high-speed tracking mount that responds to a manual button pad or various telescope control software packages that use the Astronomical Command Language (ACL). The mount is powered by a 15-volt DC power module and two custom gearmotors manufactured by Pittman Motor Company. Each motor uses a 500 line optical encoder for software communication. The tracking mount is supported by a heavy-duty tripod and wedge. Both are easily set up and fully adjustable. The tripod is a Meade Giant Field Tripod. The wedge was exclusively designed to be used with the Archimage mount. It allows for the telescopes to be mounted equatorially or in an Altitude/Azimuth (Alt/Az) configuration. It also provides an interface between the tracking mount and the automated tracking system. It includes a custom cable connection to the tracking mount as well as connectors for power, a serial communication cable, and a manual button pad controller. The dual-axis tracking mount is physically connected to the wedge by a dovetail


connector and has two more dovetail connectors for telescopes, which allow for multiple telescopes to be mounted and move synchronously. A 90-mm diameter refracting telescope and wide-angle, variable-zoom lens are mounted together on one side of the tracking mount. The 90-mm refracting telescope is an Orion ShortTube 90 model. It has an F/# of F/5.6 and approximately a 1° FOV when used with a standard 1/3-inch CCD camera. The zoom lens has a variable area aperture stop that can be adjusted from 3.5-mm to 22-mm and an adjustable focal length from 28mm to 200-mm. This gives the lens an F/# range of F/1.3 to F/57 and a FOV range from 21° down to 2°. A CCD camera and video lens combination is mounted on the Merlin bracket on the Celestron side of the tracking mount. This is a 75-mm focal length lens, which provides a 6° FOV that is used for initial acquisition of the target vehicle within the tracking software program. CCD cameras are used to image the target vehicle through the Orion telescope and zoom lens as well. The vehicle is first acquired visually with the wide-angle lens and brought into the FOV of the 75-mm lens. The tracking system then uses the 75-mm lens to begin tracking the target and center the vehicle in the FOV of the Orion telescope. The tracking system then uses the smaller FOV of the Orion telescope to center the vehicle in the FOV of the imaging system for data acquisition.
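The coarse-to-fine acquisition sequence described above can be summarized as a simple handoff loop. This is only a schematic sketch: the offset-measurement and slew callbacks are hypothetical placeholders, and the real system performs this handoff through the manual controls and the C-Sat tracking software described in Chapter 6.

```python
FOV_CHAIN_DEG = [21.0, 6.0, 1.0]   # wide-angle zoom, 75-mm lens, Orion telescope

def centered(offset_deg, fov_deg, fraction=0.1):
    """Treat the target as centered once its offset is a small fraction of the FOV."""
    return abs(offset_deg) < fraction * fov_deg

def acquire(measure_offset, slew):
    """Hand the target down the FOV chain.

    `measure_offset(fov)` returns the target's angular offset from boresight in
    the camera with the given FOV; `slew(offset)` commands the tracking mount.
    Both are placeholders standing in for the real video-tracking loop.
    """
    for fov in FOV_CHAIN_DEG:
        offset = measure_offset(fov)
        while not centered(offset, fov):
            slew(offset)
            offset = measure_offset(fov)
    # the target now sits inside the NIR imaging FOV (a few hundred arc-seconds)
```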


3.3  Computer System

HARVII requires two separate computers: one for tracking and one for data acquisition. The two systems are required because the NIR camera interface software is only compatible with the Windows NT operating system, while the tracking software requires the Windows 2000 platform. Both systems can, however, be operated simultaneously from the same monitor, keyboard, and mouse through the use of a network cable and Remote Administrator software.

Both computers were assembled from many of the same hardware components. Each system is enclosed in a 4U rack-mountable chassis that is equipped with a shock-mounted drive bay and a 300W ATX power supply. They both use the ASUS P4PE motherboard and a 3.06 GHz Pentium 4 processor. Each machine also contains a 120Gb IDE hard disk drive, a CD-ROM drive, and a 3-1/2” floppy disk drive. The data acquisition computer also comes equipped with an 18Gb high-speed video drive for high frame rate video transfer. The video drive is a Seagate Cheetah ST318451LW, Ultra-160 SCSI drive. It operates at 15000 RPM and is capable of transferring data at 160Mb/sec. It is connected to the system through an Adaptec 29160 SCSI controller card on the PCI bus. This computer also contains the Bitflow Roadrunner digital framegrabber PCI card for video capture from the NIR camera. The tracking computer runs C-Sat Satellite/Video Tracking software developed by Celestial Computing Incorporated. It is a user-friendly telescope control program specifically developed for the Archimage tracking mount. The computer communicates
with the mount via an RS-232 serial communication port, and an Epix Pixci SV4 framegrabber card on the PCI bus captures the video.
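A rough estimate of the sustained NIR video data rate shows why the dedicated video drive is comfortable for this application. The 2-bytes-per-pixel storage figure below is my assumption, not a quoted specification of the frame format.

```python
# Sustained data rate for 320 x 256, 12-bit frames at 30 Hz, assuming each
# pixel is stored in 2 bytes.
frame_bytes = 320 * 256 * 2
rate_mb_per_s = frame_bytes * 30 / 1e6
print(f"{rate_mb_per_s:.1f} MB/s")   # about 4.9 MB/s, far below the rated
                                     # transfer capability of the Ultra-160 drive
```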

3.4  Displays

HARVII will normally be operated in the field during daylight hours with little or no cloud cover. Even indirect sunlight can completely flood a standard brightness (about 200 nits) LCD monitor; therefore, these conditions require displays with enhanced brightness so that the user can easily view them. The computer systems use a sunlight-readable LCD monitor that is capable of brightness up to 1800 nits. The monitor has a maximum resolution of 1024 x 768 and four preset brightness settings, as well as an automatic brightness setting that will adjust the brightness via an ambient light sensor. The brightness control buttons and the ambient light sensor are mounted beside the monitor for easy access. Each CCD camera is monitored via a Sharp, 10-inch, high-brightness (400 nits), LCD display. The displays and computer monitor are mounted inside a shipping case, which together with a sunshade reduces glare during daylight viewing. The output from each camera is fed through a video distribution amplifier so that it can be split and input into both the framegrabber board as well as an LCD display without any loss of signal strength. Pre-mounting the displays in the case provides an effective way to ship them and also reduces setup time. The display rack and sunshade are constructed out of high
strength, 1” aluminum rails and a 1/8” honeycomb aluminum panel. Together, they provide a sturdy mounting structure as well as protection during transit.

Figure 3-4: Display Rack


4  RITS Calibration

An extensive calibration procedure has been developed and performed on the HARVII system. It is described in detail in reference 9 and will be discussed briefly here.

4.1  Non-Uniformity Correction

The Merlin NIR detector does not have a uniform response to photon flux from pixel to pixel. Under uniform illumination, the individual pixels can vary in response by as much as 50%. While this would not be a problem for examining stationary objects, objects imaged with this system will tend to move between pixels from one image to the next. When the ratio thermography technique is applied between two images, this could result in errors of up to 80% in absolute temperature if the object has moved to different pixel locations within the detector from one image to the next. The pixel-to-pixel variation is corrected using Indigo Systems’ Non-Uniformity Correction (NUC) command. This sets gain and offset coefficients that reduce any variations between pixels to less than 2%. The camera’s gain, integration time, and video offset are set for the desired temperature range. The temperature range of interest for this system is about 800-1400K; therefore, a uniform source at 800K was imaged, and the gain, integration time, and offset were adjusted until the camera gave a response equal to 10% of the 12-bit scale. The source was then heated to 1400K and the camera was adjusted to give a response equal to 90% of the 12-bit scale. This process was repeated until one combination of gain, integration time, and video offset gave the desired responses at each end of the temperature range. The manual two-point NUC command is then used to allow the camera to correct for any bad pixels and set internal gain and offset coefficients to correct for variations in individual pixel response.
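The sketch below illustrates the two-point gain-and-offset correction described above. It is a minimal example only; the array names, the bad-pixel handling, and the target response levels are illustrative assumptions and do not represent the internal Indigo Systems NUC implementation.

import numpy as np

def two_point_nuc(low_ref, high_ref, target_low, target_high):
    # low_ref, high_ref: raw frames of the uniform source at the low (800 K)
    # and high (1400 K) settings; target_low, target_high: desired responses,
    # e.g. 10% and 90% of the 12-bit scale (0.10 * 4095 and 0.90 * 4095).
    span = high_ref.astype(float) - low_ref.astype(float)
    dead = span == 0                       # pixels that show no response
    safe_span = np.where(dead, 1.0, span)
    gain = (target_high - target_low) / safe_span
    offset = target_low - gain * low_ref
    return gain, offset, dead

def apply_nuc(raw_frame, gain, offset, dead):
    corrected = gain * raw_frame + offset
    if dead.any():                         # crude bad-pixel fill, for illustration only
        corrected[dead] = corrected[~dead].mean()
    return corrected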

4.2 Detector Characterization

A procedure to characterize the camera’s response to a change in intensity was developed. From simple optics and blackbody theory, a change in incident power on the detector can arise in three ways. The first is a change in f/number of the optical system (a change in the effective aperture of the collection optics). The second is a change in the camera’s exposure time, which is electronically controlled by the camera’s integration time gate. The third is a change in power of the radiation source. Experiments were conducted to determine the camera’s response to changes in f/number and integration time using a constant intensity blackbody radiator. The radiator intensity can also be varied, but intensity does not vary linearly with temperature (Figure 2-5), so it is handled separately in the next section. Data from these experiments showed the camera’s response to changes in f/number and integration time while imaging a constant intensity source. This allowed the user to map the camera’s response to a normalized count scale, which compensated for the internal gain and offset coefficients set during the NUC procedure, as well as corrected for slight non-linearity in the camera’s response. The data from the measurements were used in a curve-fitting program to determine the best re-mapping equation for the camera’s response. The equation is given below in Figure 4-1, and a plot of the function is shown in Figure 4-2 for an expanded 16-bit scale.

F(x) = A + B⋅x/ln(x) + C/x^0.5 + D⋅ln(x)/x^2

where

A = -35264.488
B = 15.607797
C = 2.203606371 × 10^6
D = -1.47288848 × 10^10

Figure 4-1: Merlin Re-Mapping Equation

This equation allows the user to map the camera’s response, x, into a number, F(x), that is proportional to the number of photons incident upon the detector. The equation has the same form for each attenuation pair because it is a function of the camera response only.
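A direct implementation of the re-mapping function is sketched below for reference. The function and variable names are illustrative; the coefficient values are those listed in Figure 4-1, and the attenuation-specific offsets given later in Figure 4-3 would replace the default value of A when flight data are processed.

import numpy as np

# Coefficients from Figure 4-1 (expanded 16-bit scale).
B = 15.607797
C = 2.203606371e6
D = -1.47288848e10

def remap(counts, A=-35264.488):
    # Map raw camera counts into a value proportional to the number of
    # photons incident on the detector.
    x = np.asarray(counts, dtype=float)
    return A + B * x / np.log(x) + C / np.sqrt(x) + D * np.log(x) / x**2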

Figure 4-2: Merlin Re-Mapping Function (output counts versus input counts, with both axes spanning 0 to 6 × 10^4 on the expanded 16-bit scale)

4.3 Laboratory Calibration

To determine the camera’s response to changes in source intensity, laboratory data were collected at 30K intervals over the total range of the system for each bandpass filter/attenuation filter combination. The detector was uniformly illuminated using a blackbody radiation source with a quoted emissivity of 0.9995 up to 1473K. The radiator was imaged through a 1-inch optic onto the detector due to the limited size of the laboratory. The measured intensities were then averaged across the detector and processed through the Merlin characterization equation. The remapped intensities were then divided (1.3µ/1.55µ), giving three ratios, one for each attenuation value. Individual offset coefficients were assigned to each attenuation dataset until each attenuation pair produced a monotonically increasing intensity ratio curve with respect to temperature. These individual offset coefficients were required due to non-uniform transmission bands and different attenuating factors for each bandpass/attenuation filter combination. The new offset coefficients that are used in the Merlin mapping equation (Figure 4-1) are given below. The coefficients B, C, and D remain unchanged in the remapping equation.

Coefficient    0% Attenuation    50% Attenuation    90% Attenuation
A              -35381.974        -35458.5           -35445.5

Figure 4-3: Individual Offset Coefficients

The resulting intensity-ratio to temperature curves are given in the following figure.

Figure 4-4: Temperature-Ratio Response (temperature, 800 K to 1500 K, versus intensity ratio, 0.2 to 1.2, for the 0%, 50%, and 90% attenuation pairs)

These curves were then used in a curve-fitting program to determine the best equations for temperature as a function of intensity ratio. Figure 4-5 shows the temperature-to-ratio relationships for each attenuation pair as well as the useful range for each pair. In the equations, x is the ratio value obtained by dividing the remapped pixel count values of a 1.3 micron image by the remapped values of the corresponding 1.55 micron image. T(x) is the actual surface temperature corresponding to each ratio value.

0% Attenuation Ratio-Temperature Equation
T(x) = E + F⋅x + G⋅x⋅ln(x) + H⋅ln(x)^2 + I/x^0.5
E = 15381.178, F = -5373.659, G = 1964.786, H = 2461.063, I = -8707.107
Valid range: 812K < T(x) < 1190K (extrapolate down to 750K)

50% Attenuation Ratio-Temperature Equation
T(x) = J + K⋅x^2⋅ln(x) + L⋅ln(x)/x + M/x^1.5
J = 4037.792, K = 200.361, L = -3308.209, M = -2715.722
Valid range: 812K ≤ T(x) ≤ 1260K

90% Attenuation Ratio-Temperature Equation
T(x) = N + O⋅x^2 + P⋅x^4 + Q⋅x^6
N = 778.084, O = 637.188, P = -72.117, Q = 5.6556
Valid range: 840K ≤ T(x) ≤ 1450K (extrapolate up to 1625K)

Figure 4-5: Ratio-Temperature Equations

As shown in Figure 4-5 and Figure 4-4, each Intensity-Ratio equation is not valid for the entire dynamic range of the system. Only the 0% attenuation pair returns useful data between 750K-812K because not enough energy can be transmitted through the other filters to produce an adequate response at the detector. Between 812K-840K, the 0% attenuation pair and the 50% attenuation pair are useful. All three sets of data are useful between 840K-1190K. From 1190K-1260K, only the 50% attenuation pair and the 90% attenuation pair are valid because the intensity of the energy collected through the 0% attenuation pair begins to saturate the detector. Only the 90% attenuation pair can be used to determine temperature between 1260K-1625K. Since data are acquired at 30 Hz, and then the ratio of two images is used to determine temperature, the effective framerate is reduced to 15 Hz. The 15 Hz frame-rate is only valid when the temperature being measured is within the range of all three intensity-ratio equations (840K-1190K). If only two equations are valid, then the frame-rate is 10 Hz. The frame-rate becomes 5-Hz when only one equation can be used at the extreme ends of the system’s dynamic range.
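For reference, the sketch below evaluates the three ratio-temperature fits and keeps only the results that fall inside the valid range of each attenuation pair. The coefficients and ranges are those of Figure 4-5; the function names and the dictionary-based interface are illustrative, and the sketch is not the actual HARVII processing software.

import math

def t_0pct(x):
    E, F, G, H, I = 15381.178, -5373.659, 1964.786, 2461.063, -8707.107
    return E + F*x + G*x*math.log(x) + H*math.log(x)**2 + I/math.sqrt(x)

def t_50pct(x):
    J, K, L, M = 4037.792, 200.361, -3308.209, -2715.722
    return J + K*x**2*math.log(x) + L*math.log(x)/x + M/x**1.5

def t_90pct(x):
    N, O, P, Q = 778.084, 637.188, -72.117, 5.6556
    return N + O*x**2 + P*x**4 + Q*x**6

# Valid temperature range for each attenuation pair, including extrapolation.
FITS = {"0%": (t_0pct, 750.0, 1190.0),
        "50%": (t_50pct, 812.0, 1260.0),
        "90%": (t_90pct, 840.0, 1625.0)}

def temperatures(ratios):
    # ratios: corrected 1.3 micron / 1.55 micron intensity ratio for each pair.
    result = {}
    for pair, x in ratios.items():
        fit, lo, hi = FITS[pair]
        T = fit(x)
        if lo <= T <= hi:
            result[pair] = T
    return result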

4.4 Optical Efficiency Characterization

The 11-inch Schmidt-Cassegrain telescope uses a Schmidt corrector plate to compensate for spherical aberrations caused by the mirrors. Transmission through the corrector plate in the NIR spectrum can be affected by oxygen-hydrogen impurities present in the float glass. Dispersion effects and absorption due to these impurities could possibly cause a difference in transmission rates for the 1.3-micron and 1.55-micron filters. Experiments were conducted to compare the optical efficiency of the telescope optical system to the 1-inch lens optic. An integrating sphere was used as a uniform light source, and data were collected using both the RITS optical system and the 1-inch optic through each bandpass/attenuation filter combination. During data acquisition, the integration time was varied from 10 to 15 milliseconds to give a statistical comparison. At each integration time, 16 frames were recorded then averaged to ensure uniformity of the source. To determine the optical efficiency of the system, the intensities were averaged and divided (1.3µ/1.55µ) to form a 1-inch system intensity ratio and a RITS intensity ratio. These relations cancel out the effects of different f/numbers and integration times. These two intensity ratios are then combined (1-inch ratio/RITS ratio) to cancel out the effects of the intensity amounts, filter attenuation, and the QE of the camera. The result of this combined relation is a scale factor relating only the optical efficiencies of the two systems. It was assumed that the transmission through the 1-inch optic was the same for the 1.3-micron and 1.55-micron filters due to the material of the optical element (BK-7). This gave the 1-inch optic an optical efficiency value of one. The only factor that is left is the relative transmission efficiency of the 1.3-micron divided by the 1.55-micron wavelength band for the Celestron telescope system. Figure 4-6 gives the results of the analysis of the optical efficiency of the RITS system.

Attenuation Pair    Efficiency Factor
0%                  1.282
50%                 1.184
90%                 1.195

Figure 4-6: Optical Efficiency Factors

The right side of Figure 4-6 gives the scaling factor by which the measured intensity ratio must be multiplied to get the actual intensity ratio for each attenuation combination. These rescaled ratios are then used in the ratio-temperature equations to determine temperature.
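The correction itself is a single multiplication, as sketched below; the dictionary and function names are illustrative, and the factors are those listed in Figure 4-6.

# Efficiency factors from Figure 4-6 (1-inch reference ratio / RITS ratio).
EFFICIENCY_FACTOR = {"0%": 1.282, "50%": 1.184, "90%": 1.195}

def derive_factor(ref_ratio, rits_ratio):
    # Integrating-sphere comparison: the ratio measured through the 1-inch
    # BK-7 reference optic divided by the ratio measured through the RITS.
    return ref_ratio / rits_ratio

def correct_ratio(measured_ratio, pair):
    # Scale a ratio measured through the telescope to the value an ideal
    # optic would have produced, before applying the Figure 4-5 equations.
    return measured_ratio * EFFICIENCY_FACTOR[pair]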

4.5 Verification

To validate the calibration, measurements were made of a heated TPS tile at a distance of 191 feet. An enclosure made of high temperature insulation was built around the tile to provide a more stable environment and reduce reflective contributions to the tile surface. The actual temperature of the TPS tile was measured by a high temperature thermocouple. The tile was heated and then imaged through the RITS at all attenuation and bandpass filter combinations, and the data were downloaded to a hard drive. The data were then processed through the appropriate equations to yield temperatures that could be compared to the thermocouple data. The comparison at all attenuations produced an error on the order of 1% of the absolute temperature.

5 Packaging

The packaging of HARVII was designed for ease of mobility, set-up, and operation. The entire system is packaged into five shipping cases and each case is within the size and weight limits for transport by most standard delivery companies. Four of the cases are rotationally molded out of high-quality polyethylene resins. The fifth case is a welded aluminum shock-mount case, which contains a shock-isolated, 19-inch rack for standard rack-mount equipment. All cases provide protection from dirt and moisture with airtight/watertight gaskets around all lids and openings. They are also each equipped with a pressure relief valve. Several components are semi-permanently mounted and pre-wired in their respective cases to reduce the time and effort required for set-up. The following section gives a description of each case, its contents, and its features. Each case has been assigned a number from 1 to 5 corresponding to the order that it is unpacked.

5.1 Case 1

Case 1 contains the tripod, mounting plate, wedge, the RITS support bracket, a brace for the Orion telescope’s CCD camera, and the folding chairs. The mounting plate is attached to the tripod by six bolts, and the two pieces do not have to be separated for packaging. They are packaged as one piece and are placed into the case first. The spreader bracket and threaded rod are parts of the tripod that must be packaged as separate pieces. The tripod must be folded so that one leg fits in-between the others and all three feet are level along the bottom of the case. The mounting plate must be standing vertically when the tripod is properly positioned in the case. The screw pegs must then be removed from the center leg. Now a piece of foam can be placed over the feet, and the wedge can be wrapped in foam and packaged on the ‘feet’ end of the tripod. The various other pieces are then positioned in the case, and the lid is attached.

Figure 5-1: Case 1

5.2 Case 2

Case 2 is fitted for the electric power generator and the tracking mount. The tracking mount is stored on its base in an upright position. The mount gets seated into a one-inch deep cutout in the bottom foam layers. The generator is then placed next to the mount leaving enough room for a one-inch layer of foam to the sides of both pieces. A double-ply piece of foam fits between the mount and generator. Then a large U-shaped piece of foam fits over both pieces, protecting them on the front, back, and top. The lid holds everything securely in place.


Figure 5-2: Case 2

5.3 Case 3

Case 3 has a two-tier design in which the contents are divided into two levels. The lower level contains a one-piece foam insert that has custom cutouts for the 11-inch Celestron telescope, the tracking optics assembly, a CCD camera, and the power supply for the camera. Once each of these pieces is properly seated in its cutout, the one-piece upper foam insert can be placed on top. This insert was carefully shaped to cradle the lower components, and it has compartments for the other CCD camera and its power supply as well as for the NIR camera and its required accessories. The upper insert also has some extra open space for items such as cables and manuals. Each CCD camera and the NIR camera are stored within separate, softer foam cavities in the main foam inserts.

Figure 5-3: Case 3-Lower Level


Figure 5-4: Case 3-Upper Level

5.4 Case 4

Case 4 is a welded aluminum shock-mount case. It contains a standard 10-space, 19-inch equipment rack. The rack is attached to the case by four elastomer mounts. The mounts are specially tuned for the weight of the equipment and provide effective protection against shocks and vibrations. The rack holds both of the computer chassis, a keyboard/touch-pad drawer, and a surge-suppressing power strip. Both the front and the back covers are removable for easy access to all the components and connectors.

5.5 Case 5

Case 5 is unique in that it holds a rack for all of the displays. The rack is constructed out of high-strength extruded aluminum rail and is permanently mounted to the floor of the case. The three 10-inch LCD displays and the 15-inch computer monitor are all mounted on the rack. Most connections to these devices are semi-permanent and can remain intact during shipping. The power supplies for the 10-inch displays are located in the rack, behind the displays. They are all connected to a common power cord so that only one power connection has to be made. The video inputs are all fed through bulkhead connectors in the side of the case straight to their respective monitors. The computer monitor is carefully mounted to the rack with foam shock isolators. It also has a separate switch panel mounted to its left that allows the user to select from four brightness settings or an automatic brightness control using an ambient light sensor. There is a flap that hinges at the top of the rack and folds down to protect the displays during transit. This flap is made of an aluminum rail frame around a 1/8-inch honeycomb aluminum panel. When the flap is closed, it rests on a stopper rail and a foam layer separates it from the fronts of the displays. The entire rack was precisely designed to gently sandwich the displays with the foam for maximum protection against shock and vibration. The flap also doubles as a sunshade to reduce any glare on the displays during daytime operation. A foam insert containing softer, urethane foam rests on top of the display rack. This insert contains cutouts for numerous parts and accessories belonging to HARVII, including the filter wheel, the button pad hand controller, and a remote control for the displays, as well as other accessories. This insert is easily removed in one piece for quick access to the displays. A foldable hand truck is packaged on the underside of the case lid. It is held in place by adjustable quick release straps. The hand truck can be easily removed from the case and then the lid can be replaced for final transport of all the cases to the test site.

6 Operation

6.1 Set-up

HARVII can be carried by two people and set-up and operated by one person, although a second user will make the task easier. The first step in using HARVII is unpacking and physically assembling the hardware components. The tripod must be assembled and positioned, and then the wedge and tracking mount can be mounted. Next, the telescopes and lenses are attached followed by the cameras, filter wheel, and any related equipment. The computer case must be opened to gain access to all the buttons and connections. Finally, the display case can be set on the top of the computer case to form a mobile workstation for the operator to use. The next step is to make all the electrical connections to the hardware. Once all of the connections have been made, any hanging cables should be secured to their respective equipment with cable ties. This will prevent any unwanted disconnections during operation. Figure 6-1 shows the complete set-up of the system. After all the cables are properly connected and secured, the system can be powered up. HARVII can be powered by a standard 110 VAC wall socket or by its own 1000W electric power generator. Upon powering the system some components will automatically turn on while others will have to be switched on manually. Everything should now be turned on and verified to be working correctly. Now the alignment of the telescopes and lenses should be checked and adjusted if necessary.

Figure 6-1: HARVII field set-up (labeled components: Orion telescope, wide-angle lens, 75-mm acquisition lens, RITS, display case, tracking mount, and computer case)

6.2 Tracking System

6.2.1 Manual Control

The tracking mount can be manually controlled by a hand-held button pad control box (Figure 6-2). The button pad has four directional buttons designed to slew the telescopes in a “North, South, East, West” orientation. The North and South buttons move the declination motor in opposite directions, and the East and West buttons move the right ascension motor in opposite directions. There is an LCD display in the upper center of the control box. The lower left button is the Mode button. The mode is displayed on the LCD display. When the display is blank, the mount is in its normal operating mode. There are two wheels on the right side of the control box. The upper wheel controls the speed at which the mount moves. The lower wheel and the lower right button on the front of the control box are not functional at this time.

Figure 6-2: Tracking Mount Control Pad (labeled components: LCD display, slew speed adjustment wheel, directional buttons, Mode button, and the currently non-functional lower wheel and button)

6.2.2 Computer Control

Automatic tracking is made possible by the C-Sat tracking software package installed on the tracking computer. C-Sat is a satellite tracking program that was developed by Celestial Computing Inc., a company based in Boston, MA that specializes in advanced telescope tracking systems. The software communicates with the tracking mount using the Astronomy Command Language (ACL), a standard serial command protocol developed by Meridian Controls to control astronomical devices. Tracking can be accomplished by C-Sat using two methods. The first method applies to artificial satellites in orbit around Earth. It utilizes the Simplified General Perturbation (SGP) 4 orbital model and two-line orbital element (TLE) sets developed by the North American Aerospace Defense Command (NORAD). TLE sets are available for thousands of orbiting objects and are updated regularly. When used with the NORAD SGP4 orbital model, TLE sets very accurately predict the location of orbiting objects. TLE sets are available for the shuttle on every mission, but the model is not valid for the deorbit and reentry stage of the mission. Also, the model is not valid for aircraft. Tracking these types of flight vehicles is made possible by C-Sat’s video tracking capability. Video cameras are attached to the tracking mount, and the video is fed to the software by a framegrabber installed inside the computer. The software uses object recognition algorithms to locate the object within its video window. It then uses regression algorithms coupled with feedback responses to accurately model and predict the object’s motion and keep the object centered within the video display.
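For illustration only, the snippet below shows the SGP4/TLE prediction step using the publicly available sgp4 Python package; it is not part of the C-Sat software, and the function name and interface are assumptions. Converting the resulting TEME-frame position into a pointing direction for the mount is an additional step that is not shown.

from sgp4.api import Satrec, jday   # publicly available SGP4 implementation

def predict_position(tle_line1, tle_line2, year, month, day, hour, minute, second):
    # Propagate a NORAD two-line element set to the given UTC time.
    satellite = Satrec.twoline2rv(tle_line1, tle_line2)
    jd, fr = jday(year, month, day, hour, minute, second)
    error, position_km, velocity_km_s = satellite.sgp4(jd, fr)
    if error != 0:
        raise RuntimeError("SGP4 propagation failed (error code %d)" % error)
    return position_km, velocity_km_s   # TEME frame, km and km/s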

6.2.3 Software Set-up

Figure 6-3: C-Sat Main Window

Once the program is opened the C-Sat main window will appear on the screen (Figure 6-3). The user must now configure all of the settings. The telescope settings are accessed through the Settings… selection under the Telescope menu.


Figure 6-4: C-Sat Telescope Setup Window

The telescope type is ACL, the port must be set correctly, and the baud rate is 9600. The soft limits and elevation limits can be set if desired, but they are not required. The video configuration window is accessed by the Settings… selection under the Video menu.


Figure 6-5: C-Sat Video Settings Window

The user must select the appropriate BNC connector on the framegrabber for each tracking camera. The desired NTSC video size can be chosen. The Enable Camera 2 box should be checked. The remaining settings will be set automatically during the video calibration. The tracking settings are accessed through the Settings… selection under the Tracking menu.


Figure 6-6: C-Sat Tracking Settings Window

The desired tracking update rate and keypad boost distance can be input; 3 Hz and 60.0” are good settings, respectively. The feedback imager should be set to Video. The synch star settings are for use with the satellite-tracking portion of the software and can be ignored for this application. The initial feedback settings are adjusted by selecting the Feedback Settings… option under the Tracking menu.


Figure 6-7: C-Sat Feedback Settings Window

Video Camera 1 and Video Camera 2 correspond to the two cameras used for tracking. The CCD Camera box can be ignored. The proper percentage values must be selected with regard to the expected flight characteristics of the target vehicle. Higher values make the mount more sensitive and less stable, and they should only be used for vehicles that are expected to abruptly change direction or accelerate rapidly. For preliminary testing on commercial aircraft flying in straight flight paths at constant velocities, values of 30%, 30%, and 2% for Bias, Center, and Center Bias, respectively, were used. The Enable Center Bias box should be checked for both cameras. The Auto-Switch Feedback Mode can be selected if desired. This will let the software automatically switch the tracking modes at the desired times and not require the user to do it manually. For the Auto-Switch Feedback Mode, a Bias to Center Time of 6 s is appropriate, and a Center to CenBias Distance of 30” is usually adequate. These settings should also be adjusted according to the flight characteristics of the target vehicle.

Now the user may establish a link between the software and the telescope by clicking the “Connect to Telescope” button. Next, the video framegrabber board must be activated by clicking the “Activate Video Board” button. Clicking the “Open Video Window” button will open the on-screen video display.

A video calibration must now be performed so that the tracking software can determine the scale and orientation of the tracking cameras. The calibration must be performed on a distant ‘quasi-stationary’ object, preferably a star. The video calibration is initiated by selecting Calibrate Camera… under the Video menu. The on-screen instructions will guide the user through the calibration procedure. During the calibration procedure, the software will ask the user to move the object to the target pixel location within the video windows for each camera. The user should center the object in the Orion telescope’s FOV when selecting the target pixel location for the acquisition lens. This will allow for an easy transition from the 6° FOV to the 1° FOV once the vehicle is acquired. Similarly, the user should center the object within the infrared camera’s FOV when selecting the target pixel location for the Orion optical package. This will force the target object to be centered in the data acquisition window while the software is tracking from the Orion telescope.

6.2.4 Tracking Procedure

Once the video calibration has been performed, the software is ready to video track. A video-tracking run is initiated by selecting Video Track… under the Tracking menu. After the selection, the software will go through a brief “setting up” phase followed by a “waiting” phase. While in the “waiting” phase, the user has complete control of the mount through the button pad controller. Now the user must slew the telescopes to point in the general direction of the target vehicle. Once the target vehicle is visible in the wide-angle display, the user can use that display to manually bring the target into the 6° FOV of the acquisition lens. When the target becomes visible within the video display window, the tracking software will attempt to acquire and track it. If the software is not able to recognize the target, then the user must manually keep the target within the display and click on it with the mouse to show the tracking program which object should be tracked. The tracking software will then begin auto-tracking the object. Before switching cameras from the acquisition lens FOV to the Orion telescope FOV, the target must be centered and stabilized by using the different tracking modes. The C-Sat software can operate in three different video feedback modes. The three modes are Bias, Center, and Center Bias (CenBias). While in the Bias mode, the software will attempt to correct for any changes in velocity or direction of the target vehicle but not attempt to center the vehicle at the target pixel location. The Center mode will attempt to keep the vehicle at the target pixel location without making any biasing corrections. The CenBias mode will attempt to keep the vehicle at the target pixel location by utilizing both biasing and centering impulses.
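The sketch below is only an illustrative model of how centering and biasing impulses of this kind might be combined; it is not the actual C-Sat feedback algorithm, and the function name, scaling, and update form are assumptions made for the sake of the example.

def feedback_command(error_px, previous_error_px, mode, bias_pct, center_pct):
    # error_px: current offset of the target from the target pixel location;
    # previous_error_px: the offset at the previous tracking update.
    centering = (center_pct / 100.0) * error_px                    # pull toward the target pixel
    biasing = (bias_pct / 100.0) * (error_px - previous_error_px)  # respond to changes in motion
    if mode == "Bias":
        return biasing
    if mode == "Center":
        return centering
    return centering + biasing                                     # CenBias combines both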


The video feedback mode is displayed within the Feedback Settings section of the C-Sat main window. The mode can be changed by pressing C on the keyboard or the software will automatically attempt to change the mode if the Auto-switch Feedback Mode box is checked within the Feedback Settings window. When the target vehicle is initially acquired, the software will always be in Bias mode. As soon as acquisition is confirmed, the mode should be changed to Center to bring the vehicle to the target location in the video window, then the mode should quickly be changed to CenBias and remain there for the rest of the tracking run. As discussed above, each mode has a corresponding percentage coefficient that controls the strength of the corresponding feedback response. These percentages are also displayed in the Feedback Settings section of the main window. Their initial values are set before tracking begins, but they can be adjusted during tracking to compensate for any changes in the flight pattern. The percentages can be adjusted in 5% increments, and the following table shows the controls and their actions.

Control       Action
Insert        Bias +5%
Delete        Bias -5%
Home          Center +5%
End           Center -5%
Page Up       CenBias +5%
Page Down     CenBias -5%

Figure 6-8: C-Sat Controls

Once the mount is tracking well with the 75mm acquisition lens, the user will need to switch cameras so that the software can begin more precise tracking with the Orion optics. When the target is visible and relatively stable on the Orion display, the user can switch over to that camera by pressing the space bar. Once the space bar is pressed, the C-Sat video window will switch to the display from camera 2. The camera indicator will also switch from 1 to 2 in the C-Sat main window. The user now needs to click on the target with the mouse so that the software will know which target it was tracking at the previous scale. Once the vehicle is brought to the target location for camera 2, it should be within the FOV of the data acquisition optics. If the vehicle happens to be off-center in the data acquisition display, the target pixel location can be adjusted to bring it to the center. Each press of the arrow keys will change the target pixel location by one pixel in the corresponding direction. The arrow keys will often have to be used to actively keep the vehicle centered in the data acquisition window. If a tracking attempt has failed or needs to be stopped, the track can be aborted at any time by pressing the Esc key. The vehicle can then be reacquired manually, and the tracking procedure can be restarted.

6.3 Data Acquisition System

6.3.1 Overview

Digital images are obtained as output from the Merlin NIR camera in 12-bit format through a digital video cable that connects to the Bitflow framegrabber on the data acquisition computer. These images are captured at up to 30 Hz and saved directly to the high-speed disk drive. To collect useful data for this application, the filter wheel must align the proper filter with the optical path as each image is acquired. A custom software package was designed and developed to completely automate the data acquisition process. Robert Marshall of the Advanced Sensing and Optical Measurement Branch at NASA-Langley Research Center wrote the software code. There were many factors to consider when designing the data acquisition system. As the light gathered by the telescope travels through the optical system, it goes from an 11-inch beam as it enters the telescope to a point at the focal point of the optical system. At the location of the filter wheel within this optical path, the beam of light is 0.22-inches wide. The filters are 1-inch in diameter; therefore, there is a 0.78-inch window for each frame to be captured. As the filter wheel rotates at 5 rev/sec, each filter completely covers the optical path for 19.9 ms. A scaled drawing of the filter wheel is shown in the following figure. The detector is centered in the optical path and is stationary. The wheel rotates the six filters in front of the detector.

Figure 6-9: Filter Wheel (scaled drawing; the six filters rotate at 5 rev/sec past the stationary detector)

At a 30 Hz frame rate, the system has 33.3 ms to complete the process of acquiring an image, saving it to disk, and preparing for the next image. These parameters impose limits on the camera integration time and the data transfer rate from the camera to the hard disk. Also, a high-speed, precision-controlled filter wheel is needed to ensure proper alignment of each filter with the optical path. HARVII’s RITS was designed to be used with a 4 ms integration time, which is well within the 19.9 ms limit. Also, a high-speed video drive is employed resulting in a total data transfer time of 15 ms for each frame. This results in a total time requirement

67

of 19 ms from frame to frame for the data acquisition system. Therefore, the current limit on the speed of the system is the camera’s maximum frame rate of 30 Hz. The filter wheel’s motor is equipped with an encoder with a one-arc minute resolution. This encoder continually keeps the software informed of the position of the wheel. The software, in-turn, triggers the camera to acquire each image at the appropriate time. The encoder does have a small amount of uncertainty associated with it of about 10 counts per revolution. This results in an error of less than 0.05% per revolution, and is easily corrected for with the use of an optical trigger. The wheel is equipped with a light emitting diode and a photodiode receiver, which work as an optical trigger sending a pulse to the software each time the wheel makes exactly one revolution. The software then does a simple regression to correct for any errors in the wheel’s motion. 6.3.2
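The timing budget described above can be summarized with the simple check below; the constant names are illustrative and the values are those quoted in this section.

# Each 1-inch filter leaves a 1.0 - 0.22 = 0.78-inch clear window for the
# 0.22-inch beam, which at 5 rev/sec corresponds to the 19.9 ms dwell quoted above.
FRAME_PERIOD_MS = 1000.0 / 30     # 33.3 ms per frame at the 30 Hz camera limit
FILTER_DWELL_MS = 19.9            # time each filter fully covers the optical path
INTEGRATION_MS = 4.0              # RITS design integration time
TRANSFER_MS = 15.0                # frame transfer to the high-speed video drive

assert INTEGRATION_MS <= FILTER_DWELL_MS                 # exposure fits inside one filter window
assert INTEGRATION_MS + TRANSFER_MS <= FRAME_PERIOD_MS   # 19 ms budget against the 33.3 ms frame period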

6.3.2 Operation

The data acquisition software program is started by double-clicking the HARVII icon on the desktop of the data acquisition computer. The user interface window shown in the following figure will appear.


Figure 6-10: Image Acquisition Program

The user can input a file name for the images in the sequence that will be acquired. An Auto Name feature is also available, and it will automatically name each image file according to its wavelength/attenuation filter combination. Each file name will also contain the exact time and date that the file was created. The high-speed hard drive has been assigned the drive letter E and should always be used as the output directory. The user can save the image files under any desired subfolders within E:\. The frame rate has been made adjustable through the user interface. The user can select any frame rate up to 30 frames/second, but the maximum rate of 30 should be used whenever possible. Acquisition is initiated by pressing the START button. Once the button is pressed, the wheel will begin spinning and sending position information back to the computer. The computer will then trigger the camera to capture images through each filter. The software will continually collect data until the STOP button is pressed.
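A file-naming scheme in the spirit of the Auto Name feature is sketched below. The exact format used by the HARVII software is not documented here, so the function name and the pattern are illustrative assumptions.

import os
from datetime import datetime

def auto_name(output_dir, wavelength_um, attenuation_pct):
    # Tag each image file with its bandpass/attenuation combination and the
    # creation date and time (illustrative pattern only).
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")
    filename = "%.2fum_%02dpct_%s.img" % (wavelength_um, attenuation_pct, stamp)
    return os.path.join(output_dir, filename)

# Example: auto_name(r"E:\run01", 1.30, 50)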

7 System Testing

7.1 Laboratory Testing

A simulated sky was set-up in a laboratory initially to develop and test the tracking system. A 3-foot by 2-foot piece of foam-core board was hung by a pulley from the wall and connected to a small DC gear motor that could lift and lower the board at variable speeds. Different targets were cut out of tape and put on the board to simulate airplanes, the shuttle, and stars. Several combinations of targets and backgrounds were used including black targets on a white background, white targets on a black background, and retro-reflective targets on both white and black backgrounds. The targets were imaged through the zoom lens, the fixed focal length tracking lens, and the Orion telescope with the help of a focus extender. The C-Sat tracking software was able to recognize all targets on each background and calibrate the scale of the image to determine how many arc-seconds of movement corresponded to each pixel for each optical configuration. Then the targets were successfully tracked at various operating conditions. Because of the limited size of the laboratory, the tracking mount could be positioned from 8-feet up to 15-feet from the targets. The targets had a total range of movement of about 7-feet at variable speeds from 0.015 feet per second up to 0.35 feet per second. These parameters correspond to a tested angular tracking speed range from 3 arc-minutes per second up to 2-degrees per second for the Archimage mount and C-Sat software.
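The angular rates exercised in these tests follow directly from the target speed and distance; the small helper below (names illustrative) shows the small-angle calculation.

import math

def angular_rate_deg_per_s(speed_ft_per_s, distance_ft):
    # Small-angle approximation: angular rate = linear speed / distance.
    return math.degrees(speed_ft_per_s / distance_ft)

slowest = angular_rate_deg_per_s(0.015, 15.0)   # about 0.057 deg/s, roughly 3.4 arc-min/s
fastest = angular_rate_deg_per_s(0.35, 8.0)     # about 2.5 deg/s

# The slowest and fastest speed/distance combinations bracket the quoted test
# range of about 3 arc-minutes per second up to 2 degrees per second.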

7.2 Field-Testing

7.2.1 Single-Camera Tracking

Initial field-testing of the tracking system was performed on commercial aircraft flying overhead. The initial version of the tracking software only allowed for a single camera and optical system to be used. This meant that the FOV had to be large enough to allow the software to recognize the target before it moved out of view, yet small enough to stabilize the target within the much smaller FOV of the data acquisition system. Preliminary attempts were made to track objects using the wide-angle zoom lens set at a focal length of 200-mm. With a standard 1/3” format CCD camera, this gave a FOV of just over 2°. It proved to be rather difficult to manually keep the target within the FOV until the software could take over. Four successful tracks were achieved out of approximately thirty attempts. Next, the zoom lens was set to the minimum focal length of 28-mm. This gave the tracking optics a FOV of 21°. The camera now covered a large portion of the sky and targets could be acquired with ease. However, due to the greatly reduced resolution, the target vehicle became much less stable within the video display. While successful tracking was now possible on nearly every attempt, the target could not be stabilized within the data acquisition window. The obvious solution was to set the focal length somewhere in-between. However, the zoom lens did not have a locking mechanism for the focal length adjustment, and any change in focal length would cause the tracking system’s calibration to become invalid.

It was then decided to add an additional camera and lens to the tracking system. The new lens had a fixed focal length of 75-mm, which provided a FOV of 6°. This FOV allowed for easy acquisition of the targets and had the potential capability of tracking objects stably within the 472 arc-second FOV of the data acquisition system. The following figure shows the FOV of the data acquisition camera nested inside the FOV of the tracking camera.

Figure 7-1: Infrared FOV nested inside tracking FOV


The FOV of the tracking camera is 46 times larger than the FOV of the NIR camera. The next figure shows a magnified view of the NIR camera’s FOV from Figure 7-1 compared to the individual pixels of the tracking camera.

Figure 7-2: Infrared FOV with respect to the pixels of the tracking camera

In Figure 7-2, the large rectangle represents the FOV of the infrared detector, and the smaller rectangles represent the individual pixels of the CCD tracking camera. As shown, the NIR camera’s FOV only encompasses about 16 pixels across the CCD detector. This results in a requirement for a very precise tracking system. Because it is desired that the target vehicle fill about 75% of the detector, the required tracking system accuracy is +/- 2 pixels. If the tracking system cannot hold the target vehicle within 2 pixels in any direction, then it may partially drift out of the FOV of the data acquisition system, resulting in lost data.

The initial tracking runs used four cameras with the four optical packages. The wide-field lens was set at a focal length of 28mm. This lens provided a 21° FOV that was displayed on a monitor and used for initial visual acquisition of aircraft in flight. The 75mm focal length lens was used for all of the computer tracking. A camera was connected to the Orion telescope so that its view could be seen on a monitor. Video taken through the Celestron telescope was also shown on a monitor and fed to a digital video recorder for later analysis of the tracking accuracy. Once an airplane was observed flying overhead, a tracking run was initiated and the mount was slewed manually to point the optical tubes toward the airplane. Visual acquisition of the target was easily achieved through the wide-angle lens due to the 21° FOV. Once the target was visible on the wide-angle display, it was manually centered to bring the target into the FOV of the tracking camera. The tracking software then latched onto the airplane and began auto-tracking.

Several days of tests were performed using the set-up and method described above. All of the tracking attempts were made on random aircraft that were flying on unknown flight paths. The acquisition segments of the tracks were very successful. The software had the ability to recognize and latch onto virtually every airplane that passed into the FOV of the tracking camera. However, it was more difficult to stabilize the target within the FOV of the Celestron telescope. Airplanes could generally be kept within the FOV for short periods of time (20-30 sec) before fully or partially drifting out of view. The airplanes could usually be brought back into the FOV within about 5-10 sec after being lost. Several attempts were made where the airplane was kept within the FOV for periods of up to 6 minutes with only small sections of the plane drifting out of view for short periods of time (<5 sec). While these results were better than expected, the tracking system was still not as reliable and user-friendly as desired.
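The pixel coverage quoted above can be reproduced with the short calculation below; the assumed count of about 740 active horizontal CCD pixels is an illustrative assumption, not a value taken from the camera documentation.

# How many tracking-camera pixels the data acquisition FOV spans
# (single-camera configuration with the 75-mm, 6 degree tracking optics).
TRACKING_FOV_ARCSEC = 6.0 * 3600.0     # 6 degree tracking FOV
NIR_FOV_ARCSEC = 472.0                 # data acquisition FOV quoted earlier
ACTIVE_H_PIXELS = 740                  # assumed horizontal pixel count of the CCD

arcsec_per_pixel = TRACKING_FOV_ARCSEC / ACTIVE_H_PIXELS     # about 29 arc-seconds per pixel
nir_span_pixels = NIR_FOV_ARCSEC / arcsec_per_pixel          # about 16 pixels, as stated above

# Holding the target within +/- 2 of these pixels is the accuracy requirement
# discussed in this section.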

7.2.2 Two-Camera Tracking

To improve the tracking capability of the Archimage mount, it was requested that Celestial Computing develop a more advanced version of C-Sat. The software needed the ability to acquire and begin tracking vehicles using one camera then switch to a second camera with a smaller FOV for more precise tracking. The desired modifications were made to the software, and an improved version of C-Sat was released soon thereafter. The 6° FOV could still be used to latch onto the target vehicle, but now it only had to hold the vehicle steady in the 1° FOV of the medium field Orion telescope. This medium field optical system could then take over the tracking process and be used to center the vehicle in the FOV of the data acquisition system. The following figure shows a scaled representation of the three FOV’s nested within each other. The outside rectangle is the FOV of the 6° acquisition camera; the middle rectangle represents the FOV through the Orion telescope: and the inner rectangle is the FOV of the thermal camera.


Figure 7-3: Relative FOV's for the tracking and thermal cameras

The FOV of the acquisition camera is now six times larger than the FOV of the medium field camera, which is roughly seven times larger than the FOV of the NIR camera. This results in about seven times greater resolution in each FOV with respect to the pixels in the next larger FOV. The following figures show how adding one more camera stage in the tracking procedure can greatly increase the effective tracking resolution. The Orion camera’s FOV encompasses 114 pixels of the acquisition camera, and the NIR camera’s FOV covers 112 pixels of the Orion camera. The effective resolution within the NIR FOV (Figure 7-5) has been increased by seven times as compared with Figure 7-2.


Figure 7-4: Orion FOV with respect to the pixels of the acquisition camera

Figure 7-5: NIR FOV with respect to the pixels of the Orion camera


This creates about seven times more flexibility in the tracking system. The tracking software is now only required to keep the target object within 14 pixels in each direction from the target pixel location. This means that the vehicle can drift up to 28 pixels within the tracking FOV without ever leaving the FOV of the thermal camera.

Many tracking attempts were made over a period of several days with the two-camera tracking configuration. The sequence of events corresponding to a successful tracking run is as follows. First, an airplane flying overhead is visually spotted with the naked eye. Next, the mount is manually slewed with the button pad controller until the optical tubes are pointed in the general direction of the target airplane and it can be seen on the wide-angle display. Next, a video-tracking run is initiated, and the target vehicle is manually brought into the FOV of the 6° acquisition optics. As the vehicle crosses the FOV of the acquisition camera, the tracking software latches onto it and begins auto-tracking. The software then brings the vehicle to the target pixel location of the C-Sat video display. Once stabilized at the target pixel location, the target vehicle is visible in the FOV of the medium field optics. The tracking camera is then switched to camera 2 by pressing the space bar, and the C-Sat video display changes from the 6° FOV to the 1° FOV. Next, the user clicks on the target vehicle within the new 1° video display, and the software continues tracking at the higher magnification. C-Sat then stabilizes the vehicle at the target pixel location for the medium field camera. The user then checks to verify that the vehicle is completely within the FOV of the thermal camera and makes any necessary adjustments to the target pixel location to keep the vehicle centered for the remainder of the tracking run. The following figures are some images acquired of commercial aircraft as they passed over Hampton, VA in November 2003.

Figure 7-6: Boeing 747

Figure 7-7: Boeing 737


Figure 7-8: Boeing 737

Figure 7-9: Boeing 737


All images were taken from digital video recordings of various tracking runs using the two-camera version of C-Sat. The video was acquired by a visual CCD video camera that was connected to the 11-inch Celestron telescope with a Barlow lens. Figure 7-6 and Figure 7-7 each have a horizontal FOV of 317 arc-seconds. Figure 7-6 is a Boeing 747 commercial airliner flying at an estimated range of 6 miles from the ground tracking location. Good detail can be seen in the two rows of windows and the lettering. Figure 7-7 is believed to be a Boeing 737 at a slant range of approximately 17 miles. Figure 7-8 and Figure 7-9 each have a horizontal FOV of 391 arc-seconds. Each image appears to be a Boeing 737 commercial airliner. They both have an estimated slant range of 12 to 14 miles from the tracking location.

8 Concluding Remarks

8.1 Objectives of this Research

The purpose of this research was to deliver an operational system capable of tracking in-flight vehicles and returning highly accurate surface temperature maps. At the start of the work leading to this thesis, each component of the system was functional in only a limited form. The IR camera could only acquire very short sequences of images. The filter wheel had not yet been received, let alone any attempt made to integrate it into the system. Before the integration of the different components of the data acquisition system, the individual filters had to be manually placed into the optical path of the imaging system, and there was no way to acquire high-speed images at two different wavelengths. The tracking software had yet to be developed by an outside company. Therefore, once it was received, it needed to be evaluated and tested with this specific system to show that it was capable of performing adequately for the required application of this system. This involved countless hours of testing of the tracking system and constant correspondence with the software developers in order to improve the software and achieve the desired performance. The work with the tracking system led to many changes and improvements to the system. These include an upgraded computer system, an additional tracking camera and lens combination, and a more advanced tracking software package that provides seven times better stability than the original version.


The work leading to this thesis involved the complete design and production of the entire packaging system. This research also led to the addition of several hardware components to the system as well as many features that have further improved the system. These include rackmount computers, which are extremely robust and convenient for field use, the creatively designed display rack, which can be set up in the field in a matter of seconds, and the addition of an electric power generator for portability purposes. The system now includes two lightweight foldable chairs for the operators to use, a lightweight foldable hand truck for easy movement of the cases over short distances, and an easily assembled cart that becomes a base for the mobile workstation. HARVII is now completely functional and has a greatly simplified operational procedure compared to systems with similar capabilities, while it also delivers more accurate data at a fraction of the cost. This thesis explains how the thermography system was developed using the fundamental theories of radiation heat transfer. Each component of the system is described in detail, as is how the hardware and software were integrated to construct the two independent subsystems. The RITS has been calibrated and verified to give absolute surface temperatures of gray or black bodies with an accuracy of over 99%, with an estimated additional 1% error possible due to atmospheric absorption. A complete discussion of the calibration procedure, including a detailed error analysis, is available in reference 9. Each component has been packaged in a strategic manner in one of the five shipping cases. Each case is below the maximum allowable weight of 150 pounds. The packaging has also been designed to further simplify the set-up and operating procedures. An operational procedure has been developed and well documented. Finally, the system has undergone some preliminary testing to prove its operability.

8.2 Alternate Applications and Future Work

The current system is designed to give high-resolution images of the shuttle orbiter from a maximum slant range of about 280,000 feet down to a minimum distance of about 140,000 feet. However, different optical configurations can always be employed to allow HARVII to image objects over a much larger range. Other possible applications include the X-43A flights scheduled for 2004 and also the hypersonic demonstrator vehicle scheduled to be flown from Wallops Island in 2007.

HARVII is not limited to tracking hypersonic vehicles. It can be used to track almost anything that is overhead and moving at less than 5° per second. Such objects could include artificial satellites, meteors, missiles, aircraft, birds, and many others. HARVII is also not limited to imaging in the NIR wavelength range. Although the RITS requires the NIR wavelengths and is crucial to this system, an alternate system could be developed that used the tracking system and imaged objects in the visible or other spectra at very high resolutions.

The dynamic temperature range of the RITS can also be modified to allow the measurement of other objects that are out of the temperature range of the current HARVII. One way to adjust the dynamic range of HARVII is to change the integration time of the Merlin NIR detector. The length of the integration time corresponds to the number of photons allowed to reach the detector for each image taken. A new RITS system would use computer algorithms to automatically adjust the integration time to accommodate the temperature of the test object. This would eliminate the need for the attenuation filters, and a 2-position filter wheel could replace the 6-position wheel. This would also increase the effective frame rate of the RITS to a full 15 Hz for the entire tracking run. Furthermore, because the determination of temperature has been shown to be independent of the camera integration time, the three intensity-ratio equations would be reduced to just one equation.

Another way to allow HARVII to resolve lower temperatures is by upgrading the NIR optical package to use a new 12-inch telescope instead of the 11-inch telescope. A Meade LX-1200 Schmidt-Cassegrain telescope has already been received, and work is in progress to adapt it to HARVII. The larger aperture will allow 19% more light to reach the detector face, resulting in readable signals for lower temperatures. The 12-inch Meade uses a BK-7 Schmidt corrector plate as opposed to the float glass corrector plate present in the Celestron 11-inch telescope. The BK-7 plate is coated for the NIR wavelengths and has essentially the same transmission and index of refraction across the 1.3-micron to 1.8-micron band. This eliminates the need for the optical efficiency correction factor that must be used with the Celestron telescope. The elimination of the optical efficiency correction factor also eliminates the associated uncertainty factor of about 0.5% to 0.7%, depending on the attenuation pair, thus reducing the overall error associated with the system and simplifying the data handling procedure.
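The quoted 19% gain follows from the ratio of aperture areas, as the one-line check below shows.

# Light-gathering gain of a 12-inch aperture over an 11-inch aperture.
aperture_gain = (12.0 / 11.0) ** 2 - 1.0   # about 0.19, i.e. roughly 19% more light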


HARVII can also be further simplified by eliminating the need for two separate computer systems. New drivers now available for the Merlin NIR camera can allow the data acquisition software to operate under the Windows 2000 operating system. Minor modifications can be made to the software code controlling the NIR camera and filter wheel to convert it to perform with Windows 2000. A single computer system will be less costly and even easier to use and transport than the current system.


References

1. NASA, Project Plan for Reusable Launch Vehicle Focused Technology, TDISPLN-001 Baseline, December 1999.
2. D. W. Banks, Visualization of In-flight Flow Phenomena Using Infrared Thermography, NASA TM-2000-209027, July 2000.
3. B. A. Conway, NASA Programs for Advanced Sensors and Measurement Technology for Aeronautical Applications, ICAS-90-2.2.1, 1990.
4. A. Bhungalia, Thermal Protection System Analysis in an Integrated Design Environment for Launch Vehicles, AIAA A02-13919, 2002.
5. R. C. Blanchard, Shuttle Orbiter Fuselage Global Temperature Measurements from Infrared Images at Hypersonic Speeds, AIAA 2002-4702, 2002.
6. T. Bozung, Space Shuttle Infrared Image Calibration Using Thermocouple Data, Master’s Thesis, George Washington University, May 2001.
7. B. Hollis, X-33 Computational Aeroheating Predictions and Comparisons with Experimental Data, AIAA 99-3559, 1999.
8. G. Gaussorgues, Infrared Thermography, Chapman and Hall, New York, 1994.
9. J. Hundley, A Thermography System for Imaging Reusable Launch Vehicles, Master’s Thesis, George Washington University, January 2002.
10. Indigo Systems, Merlin InGaAs NIR Camera User’s Manual, Version 1.00-414-0040-10, 2000.
