If 'Calibrated Image' is left as '<No View Selected>', all open monochrome images will be deinterlaced. Alternatively, use the selection box to choose a single image to deinterlace.

Two deinterlace algorithms are provided:

- 'Interpolate alternate rows' This is how most deinterlace routines work. The data in every other row is thrown away and replaced by the average of the pixel values from the rows above and below.
- 'Scale rows' This calculates the intensity difference between the odd and even rows and scales them to match. This has the advantage that no data is thrown away. For best results, the image(s) should be dark or bias corrected before using this option. But don't apply a flat: unless the flat's exposure is the same as the image's, the flat will make the venetian blind banding worse.
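As an illustration of the first option, here is a minimal sketch of 'Interpolate alternate rows' (this is not the shipped DeinterlaceMath.js code; the function name and the assumption that the image is an array of row arrays are mine):

```javascript
// Illustrative sketch only, not the actual DeinterlaceMath.js implementation.
// image: array of rows (each row an array of pixel values).
// discardOdd: true if the odd-numbered rows are the ones to replace.
function interpolateAlternateRows(image, discardOdd) {
  const out = image.map(row => row.slice()); // copy so the input is untouched
  const start = discardOdd ? 1 : 0;
  for (let y = start; y < image.length; y += 2) {
    const above = image[y - 1];
    const below = image[y + 1];
    for (let x = 0; x < image[y].length; x++) {
      if (above && below) {
        out[y][x] = (above[x] + below[x]) / 2; // average of both neighbours
      } else if (above) {
        out[y][x] = above[x]; // edge row: only one neighbour exists
      } else if (below) {
        out[y][x] = below[x];
      }
    }
  }
  return out;
}
```

Each discarded row is simply the mean of its two neighbours, so half the rows carry no original data afterwards.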
'Max scale factor' is used by the 'Scale rows' option. The algorithm will ignore row intensity differences greater than this factor. The default value of 10.0 should work well.

Lodestar image of the RemoveDebris satellite before applying DeInterlace (viewed at 2:1 scale)

How does the 'Scale rows' algorithm work?

With an interlaced sensor, half the rows are read and downloaded first, and only then are the remaining rows read. The odd and even rows have therefore been exposed to the light source for different lengths of time. The resulting brightness difference cannot be corrected by simply applying an offset, because the required offset would depend on the brightness of the object: a pixel collects photons from a bright object at a higher rate than from a dim one. The correct method is to scale the brightness. For example, if the even rows were exposed for 20% longer, we need to scale the odd rows up by 20% to compensate.

For this scaling to work correctly, the image's bias level must first be removed. If the bias level is not removed, the scale factor would be calculated inaccurately and the correction would not apply uniformly. Think of a straight-line graph: we need to first subtract a constant so that the line passes through the origin. This is why it is important to dark correct the image before using this program. A bias-corrected image will also work.

My algorithm makes several assumptions:

- The sensor is perfectly linear. This is a reasonable assumption for the first 2/3 of the sensor's dynamic range. Very bright areas such as star cores are likely to be over-corrected, but this will probably not be noticeable.
- The background level is used to calculate the scale factor. Background sky glow or nebulae work equally well. However, if the observing site is very dark and the image does not contain extended objects (e.g. nebulae), it may not be possible to calculate the scale factor accurately.
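The scaling argument above can be checked with a few lines of arithmetic. The numbers below (a bias of 100 ADU, even rows exposed 20% longer) are hypothetical, chosen only to show why a single additive offset cannot work but a multiplicative correction after bias removal does:

```javascript
// Hypothetical numbers: bias = 100 ADU, even rows exposed 20% longer.
const bias = 100;
const ratio = 1.2; // even-row exposure / odd-row exposure

// Two objects of different brightness, as recorded by the odd rows:
const oddDim = bias + 50;     // dim object: 50 ADU of signal
const oddBright = bias + 500; // bright object: 500 ADU of signal

// The even rows collect 20% more photons from each object:
const evenDim = bias + 50 * ratio;
const evenBright = bias + 500 * ratio;

// A single additive offset cannot fix both objects at once:
// evenDim - oddDim = 10 ADU, but evenBright - oddBright = 100 ADU.
// Scaling the bias-subtracted odd rows by the exposure ratio fixes both:
const corrDim = (oddDim - bias) * ratio + bias;       // matches evenDim
const corrBright = (oddBright - bias) * ratio + bias; // matches evenBright
```

This is the straight-line picture from the text: subtracting the bias makes the line pass through the origin, after which one scale factor corrects every brightness level.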
The scale factor for each row is calculated by comparing it to the rows immediately above and below it. The background (row median value) is used so that bright point sources (e.g. stars or hot pixels) on a row do not skew the calculation. To see the algorithm in more detail, look at the JavaScript source code in the file DeinterlaceMath.js.

How does the 'Interpolate alternate rows' algorithm work?

For the ARGUS project, we needed accurate start and end times. In an interlaced image, the end time is more accurately known for the rows that are read first. The rows that have been exposed for longer are discarded and then interpolated from the rows immediately above and below them. To determine whether the odd or even rows were read first, this algorithm first finds the median value of each row. It then compares the average median value of all odd rows with the average median value of all even rows. The rows with the higher median value are the rows that were read last.
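The median-based checks described above can be sketched as follows. This is a simplified illustration under my own assumptions, not the DeinterlaceMath.js source; the function names are hypothetical:

```javascript
// Illustrative sketches only, not the actual DeinterlaceMath.js code.
function median(values) {
  const s = values.slice().sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Rows with the higher average median were exposed longer, i.e. read last.
function rowsReadLast(image) {
  let oddSum = 0, evenSum = 0, oddN = 0, evenN = 0;
  image.forEach((row, y) => {
    if (y % 2) { oddSum += median(row); oddN++; }
    else { evenSum += median(row); evenN++; }
  });
  return oddSum / oddN > evenSum / evenN ? 'odd' : 'even';
}

// Per-row scale factor from the medians of the neighbouring rows,
// ignoring implausible factors (cf. 'Max scale factor'). Using the row
// median as the background keeps stars and hot pixels from skewing it.
function rowScaleFactor(image, y, maxScale) {
  const neighbours = (median(image[y - 1]) + median(image[y + 1])) / 2;
  const factor = neighbours / median(image[y]);
  return factor > maxScale || factor < 1 / maxScale ? 1 : factor;
}
```

When the factor falls outside the allowed range, returning 1 leaves the row unchanged, which matches the documented behaviour of ignoring row intensity differences greater than 'Max scale factor'.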