Robinson, J. A., D. A. Liddle, C. A. Evans, and D. L. Amsbury. 2002. Astronaut-acquired orbital photographs as digital data for remote sensing: spatial resolution. International Journal of Remote Sensing, 23(20):4403-4438.


Astronaut-acquired Orbital Photographs as Digital Data for Remote Sensing: Spatial Resolution



5. Estimating spatial resolution of astronaut photographs

     In order to use astronaut photographs for digital remote sensing, it is important to be able to calculate the equivalent of an IFOV: the ground area represented by a single pixel in a digitised orbital photograph. The obliquity of most photographs means that pixel 'sizes' vary at different places in an image. Given a ground distance (D) represented by a photograph in each direction (horizontal, vertical), an approximate average pixel width (P, the equivalent of IFOV) for the entire image can be calculated as follows:

          P = D / (d S)          (2)

where D is the projected distance on the ground covered by the image in the same direction as the pixel is measured, d is the actual width of the image on the original film (table 1), and S is the digitising spatial resolution.
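As a minimal sketch of equation 2 (the function name and the sample numbers are ours, not from the paper), with the digitising resolution S given in pixels per inch:

```python
def pixel_width_m(D_m, d_mm, S_ppi):
    """Equation 2: approximate ground width of one pixel.

    D_m   ground distance covered by the image in one direction (m)
    d_mm  width of the image on the original film in that direction (mm)
    S_ppi digitising spatial resolution (pixels per inch)
    """
    pixels_across = (d_mm / 25.4) * S_ppi  # film width in inches times ppi
    return D_m / pixels_across

# e.g. a 55 mm frame covering 200 km of ground, scanned at 2400 ppi,
# yields pixels of roughly 38.5 m
print(round(pixel_width_m(200_000, 55, 2400), 1))
```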

     Here, we present three mathematical formulations for estimating the size of the footprint, or area on the ground covered by the image. Example results from the application of all three formulations are in table 5. The first and simplest calculation (formulation 1) gives an idea of the maximum spatial resolution attainable at a given orbital altitude with a given film format and a perfectly vertical (nadir) view downward. Formulation 2 takes obliquity into account by calculating the look angle from the difference between the location on the ground represented at the centre of the photograph and the nadir location of the spacecraft at the time the photograph was taken (figure 1). Formulation 3 describes an alternate solution to the oblique look angle problem using coordinate-system transformations. This formulation has been implemented in a documented spreadsheet, available for download (http://eol.jsc.nasa.gov/sseop/Low_Oblique_301_Locked.xls), and in our Web-based user interface to the Astronaut Photography Database.

     Although formulations 2 and 3 account for obliquity, for purposes of calculation they treat the position of the spacecraft and the position of the camera as one. In actuality, astronauts generally hold the cameras by hand (although cameras mounted in brackets in the window are also used), and the selection of window, position of the astronaut in the window, and rotation of the camera relative to the movement of the spacecraft are not known. Thus, calculations using only the photo centre point (PC) and spacecraft nadir point (SN) give a locator ellipse and not the locations of the corners of the photograph. A locator ellipse describes an estimated area on the ground that is likely to be included in a specific photograph regardless of the rotation of the film plane about the camera's optical axis (figure 6).

     Estimating the corner positions of the photo requires additional user input of a single auxiliary point: a location on the image that has a known location on the ground. Addition of this auxiliary point is an option available to users of the spreadsheet. An example of the results of adding an auxiliary point is shown in figure 7, with comparisons of the various calculations in table 5.


5.1. Formulation 1. Footprint for a nadir view.
     The simplest way to estimate footprint size is to use the geometry of the camera lens and spacecraft altitude to calculate the scaling relationship between the image on the film and the area covered on the ground. For a perfect nadir view, the scale relationship from the geometry of similar triangles is

          d / D = f / H          (3)

where d = original image size, D = distance of footprint on the ground, f = focal length of lens, and H = altitude. Once D is known, pixel size (length = width) can be calculated from equation 2. These calculations represent the minimum footprint and minimum pixel size possible for a given camera system, altitude and digitising spatial resolution (table 2). Formulation 1 was used for calculating minimum pixel sizes shown in tables 2 and 4.
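The similar-triangles relation rearranges to D = dH/f. A short illustration (the function name and the example numbers are ours):

```python
def nadir_footprint_m(d_mm, f_mm, H_m):
    """Formulation 1 (equation 3): ground distance covered by a perfect
    nadir view. From similar triangles d/D = f/H, so D = d*H/f."""
    return (d_mm / f_mm) * H_m  # d/f is dimensionless, so D is in metres

# A 55 mm frame with a 100 mm lens at 300 km altitude covers about 165 km.
print(round(nadir_footprint_m(55, 100, 300_000), 1))
```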

     By assuming digitising at 2400 ppi (10.6 µm/pixel), currently a spatial resolution commonly attainable from multipurpose colour transparency scanners (see section 4.4.1, above), we used this formulation to convert area covered to an IFOV equivalent for missions of different altitudes (table 2). Table 4 provides a comparison of the IFOV of images from various satellites, including the equivalent for astronaut photography. Values in this table were derived by using formulation 1 to estimate the area covered, because a perfect nadir view represents the best possible spatial resolution and smallest field of view that could be obtained.

5.2. Formulation 2. Footprint for oblique views using simplified geometry and the great circle distance
     A more realistic approach to determining the footprint of the photograph accounts for the fact that the camera is not usually pointing perfectly down at the nadir point. The look angle (the angle off nadir that the camera is pointing) can be calculated trigonometrically by assuming a spherical earth and calculating the distance between the coordinates of SN and PC (figure 1) using the great circle distance, haversine solution (Sinnott 1984, Snyder 1987:30-32, Chamberlain 1996). The difference between the PC and SN latitudes, Δlat = lat2 − lat1, and the difference between the PC and SN longitudes, Δlon = lon2 − lon1, enter the following equations:

          a = sin²(Δlat/2) + cos(lat1) cos(lat2) sin²(Δlon/2)          (4)

          c = 2 arcsin(min[1, √a])          (5)

          t = arctan(Rc / H)          (6)

where R is the radius of the earth, so that Rc is the great circle distance between PC and SN, and t is the look angle.
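The haversine solution and the resulting look angle can be sketched as follows. Function names are ours, the spherical radius is the one adopted in section 5.3.2, and treating the ground offset with flat geometry inside the arctangent is our simplification:

```python
import math

R_EARTH_M = 6372161.54  # spherical earth radius used by the calculator

def great_circle_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance (Sinnott 1984) between SN and PC,
    with coordinates in decimal degrees."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
           * math.sin(dlon / 2) ** 2)
    return 2 * R_EARTH_M * math.asin(min(1.0, math.sqrt(a)))

def look_angle_deg(lat1, lon1, lat2, lon2, H_m):
    """Look angle t off nadir, treating the ground offset as flat geometry."""
    return math.degrees(math.atan2(great_circle_m(lat1, lon1, lat2, lon2), H_m))
```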

Assuming that the camera was positioned so that the imaginary line between the centre and nadir points (the principal line) runs vertically through the centre of the photograph, the distance between the geometric center of the photograph (principal point, PP) and the top of the photograph is d/2 (figure 8, A). The scale at any point in the photograph varies as a function of the distance, y, along the principal line between the isocentre and the point according to the relationship

          scale = (f − y sin t)² / (f H)          (7)

(Wong 1980, equation 2.14; assuming H >> the ground elevation, h). Using figure 8, A, at the top of the photo,

          scale(top) = (f + (d/2) sin t)² / (f H)          (8)

and at the bottom of the photo,

          scale(bottom) = (f − (d/2) sin t)² / (f H)          (9)

Thus, for given PC and SN coordinates and assuming a photo orientation like figure 8A and not figure 8B, we can estimate a minimum D (using equations 7 and 8) and a maximum D (using equations 7 and 9), and then average the two to determine the pixel size (P) via equation 2.
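Formulation 2 can be sketched as follows. The function name and the sign convention (which photo edge lies toward the nadir) are our assumptions, and the degenerate case t = 0 collapses to Formulation 1:

```python
import math

def formulation2_pixel_m(d_mm, f_mm, H_m, t_deg, S_ppi):
    """Sketch of Formulation 2: average pixel size along the principal line.

    The scale along the principal line is taken as (f - y sin t)^2 / (f H),
    evaluated at the two edges of the frame (y = +/- d/2). Each edge's
    scale implies a ground distance D = d / scale; the minimum and maximum
    D are averaged and converted to a pixel size as in equation 2.
    """
    t = math.radians(t_deg)
    f, d = f_mm / 1000.0, d_mm / 1000.0          # film dimensions in metres
    scale_near = (f + (d / 2) * math.sin(t)) ** 2 / (f * H_m)  # edge toward nadir
    scale_far = (f - (d / 2) * math.sin(t)) ** 2 / (f * H_m)   # edge away from nadir
    D_min, D_max = d / scale_near, d / scale_far
    pixels_across = (d_mm / 25.4) * S_ppi        # pixel count after digitising
    return ((D_min + D_max) / 2) / pixels_across

# At t = 0 a 55 mm frame, 100 mm lens, 300 km altitude and 2400 ppi give
# pixels of roughly 32 m, matching the nadir case.
```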

5.3. Formulation 3. The Low Oblique Space Photo Footprint Calculator.
The Low Oblique Space Photo Footprint Calculator was developed to provide a more accurate estimate of the geographic coordinates of the footprint of a low oblique photo of the Earth's surface taken from a human-occupied spacecraft in orbit. The calculator performs a series of three-dimensional coordinate transformations to compute the location and orientation of the centre of the photo exposure plane relative to an Earth-referenced coordinate system. The nominal camera focal length is then used to create a vector from the photo's perspective point through each of eight points around the perimeter of the image, as defined by the format size. The geographic coordinates for the photo footprint are then computed by intersecting these photo vectors with a spherical earth model. Although more sophisticated projection algorithms are available, no significant increase in the accuracy of the results would be produced by these algorithms because of inherent uncertainties in the available input data (i.e. the spacecraft altitude, photo centre location, etc.).

     The calculations were initially implemented within a Microsoft Excel workbook, which allowed us to embed the mathematical and graphical documentation next to the actual calculations. Thus, interested users can inspect the mathematical processing. A set of error traps was also built into the calculations to detect erroneous results. A summary of any errors generated is reported to the user with the results of the calculations. Interested individuals are invited to download the Excel workbook from http://eol.jsc.nasa.gov/sseop/Low_Oblique_301_Locked.xls. The calculations are currently being encoded in a high-level programming language and should soon be available alongside other background data provided for each photograph at Office of Earth Sciences (2000).

     For the purposes of these calculations, we defined a low oblique photograph as one with the centre within 10 degrees of latitude and longitude of the spacecraft nadir point. For the typical range of spacecraft altitudes to date, this restricted the calculations to photographs in which Earth’s horizon does not appear (the general definition of a low oblique photograph, e.g. Campbell 1996:71, and figure 4).

5.3.1 Input data and results
     Upon opening the Excel workbook, the user is presented with a program introduction providing instructions for using the calculator. The second worksheet tab ("How-To-Use") provides detailed step-by-step instructions for preparing the baseline data for the program. The third tab ("Input-Output") contains the user input fields and displays the results of the calculations. The additional worksheets contain the actual calculations and program documentation. Although users are welcome to review these sheets, an experienced user need only access the "Input-Output" spreadsheet.

     To begin a calculation, the user enters the following information, which is available for each photo in the NASA Astronaut Photography Database: (1) SN, the geographic coordinates of the spacecraft nadir position at the time of the photo; (2) H, spacecraft altitude; (3) PC, the geographic coordinates of the centre of the photo; (4) f, nominal focal length; and (5) d, image format size. The Web-based implementation of the workbook enters these values and completes the calculations automatically.

     For more accurate results, the user may optionally enter the geographic coordinates and orientation of an auxiliary point on the photo, which resolves the camera's rotational uncertainty about the optical axis. The auxiliary point data must be computed by the user following the instructions contained in the "How-To-Use" tab of the spreadsheet.

     After the input data are entered, the geographic coordinates of the photo footprint (i.e. four photo corner points, four points at the bisector of each edge, and the centre of the photo) are immediately displayed below the input fields, along with any error messages generated by the user input or by the calculations (figure 7). Although results are computed and displayed, they should not be used when error messages are produced by the program. The program also computes the tilt angle of each photo vector relative to the spacecraft nadir vector. The arc distance along the surface of the sphere between adjacent computed points is displayed to the right of the photo footprint coordinates.

5.3.2 Calculation Assumptions
     The mathematical calculations implemented in the Low Oblique Space Photo Footprint Calculator use the following assumptions:

  1. The SN location is used as exact. Although our determination of the nadir point at the instant of a known spacecraft vector is relatively precise (±1.15 × 10⁻⁴ degrees), the propagator interpolates between sets of approximately 10-40 known vectors per day, and the time code recorded on the film can drift. Thus, the true value for SN may vary by up to ±0.1° from the value provided with the photo.
  2. The spacecraft altitude is used as exact.
  3. The perspective centre of the camera is assumed to be at the given altitude over the specified spacecraft nadir location at the time of photo exposure.
  4. The PC location is used as exact, even though the true value may vary by up to ±0.5° latitude and ±0.5° longitude from the location provided with the photo.
  5. A spherical earth model is used with a nominal radius of 6,372,161.54 m (a common first order approximation for a spherical earth used in geodetic computations).
  6. The nominal lens focal length of the camera lens is used in the computations (calibrated focal length values are not available).
  7. The photo projection is based on the classic pin-hole camera model.
  8. No correction for lens distortion or atmospheric refraction is made.
  9. If no auxiliary point data is provided, the "Top of the Image" is oriented perpendicular to the vector from SN towards PC.

5.3.3 Transformation from Earth to photo coordinate systems
     The calculations begin by converting the geographic coordinates (latitude and longitude) of the SN and PC to a Rectangular Earth-Centred Coordinate System (R-Earth), defined as shown in figure 9 (with the centre of the Earth at [0, 0, 0]). Using the vector from the Earth's centre through SN and the spacecraft altitude, the spacecraft location (SC) is also computed in R-Earth.
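A sketch of that first transformation (the function names and the axis conventions, +Z through the north pole and +X through latitude 0, longitude 0, are our assumptions; the spreadsheet documents its own):

```python
import math

R_EARTH_M = 6372161.54  # spherical earth radius used by the calculator

def geo_to_r_earth(lat_deg, lon_deg, radius_m=R_EARTH_M):
    """Convert geographic coordinates on the spherical earth model to the
    Rectangular Earth-Centred system (R-Earth), origin at [0, 0, 0]."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (radius_m * math.cos(lat) * math.cos(lon),
            radius_m * math.cos(lat) * math.sin(lon),
            radius_m * math.sin(lat))

def spacecraft_r_earth(lat_deg, lon_deg, H_m):
    """Spacecraft location SC in R-Earth: extend the vector through SN
    outward from the Earth's centre by the altitude H."""
    x, y, z = geo_to_r_earth(lat_deg, lon_deg)
    k = (R_EARTH_M + H_m) / R_EARTH_M
    return (x * k, y * k, z * k)
```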

     For ease of computation, we next define a Rectangular Spacecraft-Centred Coordinate System (R-Spacecraft), as shown in figure 9. The origin of R-Spacecraft is located at SC, with its +Z axis aligned with the vector from the centre of the Earth through SN and its +X axis aligned with the vector from SN to PC (figure 9). The specific rotations and translation used to convert from R-Earth to R-Spacecraft are computed and documented in the spreadsheet.

     With the mathematical positions of SC, SN, PC, and the centre of the Earth computed in R-Spacecraft, the program next computes the location of the camera's principal point (PP). The principal point is the point of intersection of the optical axis of the lens with the image plane (i.e. the film). It is nominally positioned at a distance equal to the focal length from the perspective centre of the camera (which is assumed to be at SC) along the vector from PC through SC, as shown in figure 9.

     We next create a third coordinate system, the Rectangular Photo Coordinate System (R-Photo) with its origin at PP, its X-Y axial plane normal to the vector from PC through SC and its +X axis aligned with the +X axis of R-Spacecraft, as shown in figure 9. The X-Y plane of this coordinate system represents the image plane of the photograph.

5.3.4 Auxiliary Point Calculations
     The calculations above employ a critical assumption: that all photos are taken with the "top of the image" oriented perpendicular to the vector from SN towards PC, as shown in figure 9. To avoid non-uniform solar heating of the external surface, most orbiting spacecraft are slowly and continually rotated about one or more axes. In this condition, a flight crew member taking a photo while floating in microgravity could orient the photo with practically any rotation relative to the horizon (see also figure 8, B). Unfortunately, since these photos are taken with conventional hand-held cameras, there is no information other than the photo itself that can be used to resolve the photo's rotational ambiguity about the optical axis. This is why the above assumption is used, and the footprint computed by this calculator is actually a "locator ellipse", which estimates the area on the ground that is likely to be included in a specific photograph (see figure 6). This locator ellipse is most accurate for square image formats and is subject to additional distortion as the photograph format becomes more rectangular.

     If the user wants a more precise calculation of the image footprint, the photo's rotational ambiguity about the optical axis must be resolved. This can be done in the calculator by adding data for an auxiliary point. Detailed instructions regarding how to prepare and use auxiliary point data in the computations are included in the "How-To-Use" tab of the spreadsheet. Basically, the user determines which side of the photograph is the top, and then measures the angle between the line from PP to the top of the photo and the line from PP to the auxiliary point on the photo (figure 9).

     If the user includes data for an auxiliary point, a series of computations is completed to resolve the photo rotation ambiguity about the optical axis (i.e. the +Z axis in R-Photo). A vector from the Auxiliary Point on the Earth (AE) through the photograph perspective centre (located at SC) is intersected with the photo image plane (the X-Y plane of R-Photo) to compute the coordinates of the Auxiliary Point on the Photo (AP) in R-Photo. A two-dimensional angle in the X-Y plane of R-Photo, from the –X-axis to a line from PP to AP, is calculated, as shown in figure 9. The –X-axis is used as the origin of the angle since it represents the top of the photo once the image passes through the perspective centre. The difference between the computed angle and the angle measured by the user on the photo resolves the ambiguity in the rotation of the photo relative to the principal line (figures 7 and 9). The transformations from R-Spacecraft to R-Photo are then modified to include an additional rotation angle about the +Z-axis in R-Photo.

5.3.5 "Footprint" Calculations
     The program next computes the coordinates of eight points about the perimeter of the image format (i.e. located at the four photo corners, plus a bisector point along each edge of the image). These points are identified in R-Photo based upon the photograph format size and then converted to R-Spacecraft. Since all computations are done in orthogonal coordinate systems, the R-Spacecraft to R-Photo rotation matrix is transposed to produce an R-Photo to R-Spacecraft rotation matrix. Once in R-Spacecraft, a unit vector from each of the eight perimeter points through the photo perspective centre (the same point as SC) is computed. This provides the coordinates of the points about the perimeter of the image format, with their direction vectors, in a common coordinate system with the other key points needed to compute the photo footprint.

     The next step is to compute the point of intersection between the spherical earth model and each of the eight perimeter point vectors. The scalar value for each perimeter point unit vector is computed using two-dimensional planar trigonometry. An angle γ is computed using the formula for the cosine of the angle between two vectors (the perimeter point unit vector and the vector from SC to the centre of the Earth). Angle ψ is computed using the Law of Sines. Angle ε = 180° − γ − ψ. The scalar value of the perimeter point vector is computed using ε and the Law of Cosines. The scalar value is then multiplied by the perimeter point unit vector to produce the three-dimensional point of intersection of the vector with the Earth's surface in R-Spacecraft. The process is repeated independently for each of the eight perimeter point vectors. Aside from its mathematical simplicity, this approach has the advantage that the argument of the arcsine used to compute ψ exceeds the valid range for the sine of an angle whenever a perimeter point vector fails to intersect the surface of the earth. A simple test, based on this principle, allows the program to correctly handle oblique photos that image a portion of the horizon (see results for high oblique photographs in table 5).
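The triangle solution can be sketched as follows. The triangle's vertices are SC, the Earth's centre, and the intersection point, with γ the angle at SC, ψ the angle at the intersection point, and ε the angle at the Earth's centre; `ray_sphere_scalar` is a hypothetical helper name, and the spreadsheet's own steps may differ in detail:

```python
import math

def ray_sphere_scalar(cos_g, sc_dist_m, radius_m):
    """Distance along a perimeter-point unit vector from SC to the
    spherical earth model.

    cos_g     cosine of gamma, the angle between the unit vector and the
              vector from SC to the Earth's centre
    sc_dist_m distance from SC to the Earth's centre
    Returns None when the vector misses the earth (a horizon-viewing
    photo), mirroring the program's arcsine range test."""
    g = math.acos(cos_g)
    sin_psi = sc_dist_m * math.sin(g) / radius_m  # Law of Sines
    if sin_psi > 1.0:
        return None                               # no intersection
    psi = math.pi - math.asin(sin_psi)            # obtuse: nearer intersection
    eps = math.pi - g - psi
    # Law of Cosines for the side opposite eps (SC to intersection point)
    return math.sqrt(sc_dist_m ** 2 + radius_m ** 2
                     - 2 * sc_dist_m * radius_m * math.cos(eps))
```

As a sanity check, a vector pointing straight down (γ = 0) returns the spacecraft altitude, and a vector perpendicular to the local vertical misses the sphere entirely.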

     The final step in the process converts the eight earth intersection points from the R-Spacecraft to R-Earth. The results are then converted to the standard geographic coordinate system and displayed on the "Input-Output" page of the spreadsheet.

5.4. Examples
     We applied all three formulations to the photographs included in this paper, and the results are compared in table 5. Formulation 1 gives too small a value for the distance across the photograph (D) for all but the most nadir shots, and thus serves as an indicator of the best theoretical case but is not a good measure for a specific photograph. For example, the photograph of Lake Eyre taken from 276 km altitude (figure 2, A) and the photograph of Limmen Bight (figure 7) were closest to being nadir views (offsets < 68 km or t < 15°, table 5). For these photographs, D calculated using Formulation 1 was similar to the minimum D calculated using Formulations 2 and 3. For almost all other, more oblique photographs, Formulation 1 gave a significant underestimate of the distance covered in the photograph. For figure 5, A (the picture of Houston taken with a 40 mm lens), Formulation 1 did not give an underestimate for D. This is because Formulation 1 does not account for curvature of the Earth in any way. With this large field of view, assuming a flat Earth inflated the value of D above the minimum from calculations that included Earth curvature.

     A major difference between Formulations 2 and 3 is the ability to estimate pixel sizes (P) in both directions (along the principal line and perpendicular to the principal line). For the more oblique photographs, the vertical estimate of D and of pixel sizes is much larger than in the horizontal direction (e.g. the low oblique and high oblique photographs of Hawaii, figure 4, table 5).

     For the area of Limmen Bight (figure 7), table 5 illustrates the improvement in the estimates of distance and pixel size that can be obtained by re-estimating the location of the PC with greater accuracy. Centre points in the catalogued data are ±0.5° of latitude and longitude. When the centre point was re-estimated to ±0.02°, we determined that the photograph was not taken as obliquely as first thought (the estimate of the look angle t changed from 16.9° to 12.8°, table 5). When the auxiliary point was added to the calculations of Formulation 3, the calculated look angle shrank further to 11.0°, indicating that this photograph was taken very close to a nadir view. Of course, this improvement in accuracy could also have led to estimates of greater obliquity, and correspondingly larger pixel sizes.

     We also tested the performance of the scale calculator with an auxiliary point by estimating the corner and point locations on the photograph using a 1:1,000,000 Operational Navigational Chart. For this test, we estimated our ability to read coordinates from the map as ±0.02° and our error in finding the locations of the corner points as ±0.15° (this error varies among photographs depending on the detail that can be matched between photo and map). For Limmen Bight (figure 7), the mean difference between map estimates and calculator estimates for 4 points was 0.31° (SD = 0.18, n = 8). For a photograph of San Francisco Bay (STS062-151-291), the mean difference between map estimates and calculator estimates for 4 points was 0.064° (SD = 0.18, n = 8), and for 8 points it was 0.196° (SD = 0.146, n = 16). Thus, in one case the calculator estimates were better than our estimate of the error in locating corner points on the map. It is reasonable to expect that for nadir-viewing photographs, the calculator used with an auxiliary point can estimate the locations of the edges of a photograph to within ±0.3°.

5.5. Empirical Confirmation of Spatial Resolution Estimates
     As stated previously, a challenge to estimating system-AWAR for astronaut photography of Earth is the lack of suitable targets. Small features in an image can sometimes be used as a check on the size of objects that can be successfully resolved, giving an approximate value for GRD. Similarly, the number of pixels that make up those features in the digitised image can be used to make an independent calculation of pixel size. We have successfully used features such as roads and airport runways to make estimates of spatial scale and resolution (e.g. Robinson et al. 2000c). While recognising that the use of linear detail in an image is a poor approximation to a bar target, and that linear objects smaller than the resolving power can often be detected (Charman 1965), we could find few objects other than roads to make any direct estimates of GRD. Thus, we used roads and runways in the images of Houston (where we can readily conduct ground verifications, and where a number of higher-contrast concrete roadways were available) to obtain empirical estimates of GRD and pixel size for comparison with table 5.

     In the all-digital ESC image of Houston (figure 3, D) we examined Ellington Field runway 4-22 (centre left of the image), which is 2438.4 × 45.7 m. This runway is approximately 6-7 pixels in width and 304-309.4 pixels in length, so pixels represent a distance on the ground of 7-8 m. Using a lower contrast measure of a street length between two intersections (212.5 m = 21.1 pixels), we estimate a pixel width of 10.1 m. These results compare favourably with the minimum estimate of 8.1 m pixels using Formulation 1 (table 5). For an estimate of GRD that would be more comparable to aerial photography of a line target, the smallest street without tree cover that we could clearly distinguish on the photograph was 7.92 m wide. The smallest non-street object (a gap between stages of the Saturn rocket on display in a park at Johnson Space Center) that we could clearly distinguish on the photograph was 8.53 m wide.
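The arithmetic behind that runway-based pixel estimate can be checked directly (the pixel counts are the measured values quoted above; the 6.5-pixel width is our midpoint of the quoted 6-7 pixel range):

```python
# Ellington Field runway 4-22: 2438.4 m long, 45.7 m wide.
runway_length_m, runway_width_m = 2438.4, 45.7
length_px, width_px = 309.4, 6.5   # measured pixel counts (width: midpoint of 6-7)

print(round(runway_length_m / length_px, 1))  # metres per pixel from the length
print(round(runway_width_m / width_px, 1))    # metres per pixel from the width
```

Both ratios fall in the 7-8 m range quoted in the text.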

     For the photograph of Houston taken with a 250-mm lens (figure 3, C) and digitised from second generation film at 2400 ppi (10.6 µm/pixel), Ellington Field runway 4-22 is 3 pixels in width and 161.3 pixels in length, so pixels represent 15.1-15.2 m on the ground. These results compare favourably with the minimum estimate of 15.2 m pixels using Formulation 2 and 15.4-18.5 m pixels using Formulation 3 (table 5). For an estimate of GRD using an 8 × 8 inch print (1:3.69 enlargement) and 4× magnification, the smallest street we could clearly distinguish was 8.22 m wide; the same feature could barely be distinguished on the digitised image.

     We also made an empirical estimate of spatial resolution for lower contrast vegetation boundaries. By clearing forest so that a pattern would be visible to landing aircraft, a landowner outside Austin, Texas (see also the aerial photo in Lisheron 2000), created a target that is also useful for evaluating the spatial resolution of astronaut photographs. The forest was selectively cleared in order to spell the landowner's name 'LUECKE' with the remaining trees (figure 10). According to the local surveyors who planned the clearing, the plan was to create letters that were 3100 × 1700 ft (944.9 × 518.2 m). Photographed at a high altitude relative to most Shuttle missions (543 km) with a 250-mm lens, Formulation 3 predicts that each pixel would represent an area of 28.6 × 36.0 m on the ground (table 5). When the original film was digitised at 2400 ppi (10.6 µm/pixel), the letters correspond to 29.4 × 18.8 pixels, for a comparable pixel size of 27-32 m.
