FIELD OF THE INVENTION
The present invention relates to the field of holographic data storage systems. In particular, the present invention relates to a method and system for equalizing holographic data pages.
BACKGROUND OF THE INVENTION
Holographic data storage systems store information or data based on the concept of a signal beam interfering with a reference beam at a holographic storage medium. The interference of the signal beam and the reference beam creates a holographic representation, i.e., a hologram, of data elements as a pattern of varying refractive index and/or absorption imprinted in a volume of a storage or recording medium such as a photopolymer or photorefractive crystal. Combining a data-encoded signal beam, referred to as an object beam, with a reference beam creates the interference pattern at the storage medium. A spatial light modulator (SLM), for example, can create the data-encoded signal beam. The interference pattern induces material alterations in the storage medium that generate the hologram. The formation of the hologram in the storage medium is a function of the relative amplitudes and polarization states of, and phase differences between, the signal beam and the reference beam. The hologram is also dependent on the wavelengths and angles at which the signal beam and the reference beam are projected into the storage medium. After a hologram is created in the storage medium, projecting the reference beam into the storage medium reconstructs the original data-encoded signal beam. The reconstructed signal beam may be detected by using a detector, such as a CMOS photodetector array or the like. The detected data may then be decoded into the original encoded data.
In a page-oriented holographic data storage device, it is advantageous to minimize the size of the holograms in order to achieve maximum storage density. One method of accomplishing this is minimizing the size of the page imaging aperture. However, minimizing the size of the aperture has the consequence of increasing blur, in terms of broadening the pixel spread function (PSF) in the page images. This blur decreases the signal-to-noise ratio (SNR) of the holographic storage device, which increases the bit error rate (BER) of the system, which in turn limits the storage density.
Since blur in an image is a deterministic process, much of the SNR loss may be reclaimed by digitally post-processing the detected page image. Traditionally, the detected image is convolved with a small matrix w, known as a kernel, representing an inverse blurring operation (deconvolution), thereby implementing finite impulse response (FIR) filter equalization.
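The FIR equalization described above amounts to a two-dimensional convolution of the detected page with the kernel w. The following Python sketch illustrates the operation in its simplest form; the function name and the zero-padded boundary handling are illustrative assumptions rather than details taken from the prior art.

```python
def convolve2d_same(image, kernel):
    """'Same'-size 2D filtering of a page image with a small odd-sized kernel.

    Implemented as correlation; for the symmetric kernels typical of
    equalization the distinction from true convolution is immaterial.
    Pixels outside the image are treated as zero (an illustrative choice).
    """
    rows, cols = len(image), len(image[0])
    k = len(kernel)          # kernel is k x k, k odd
    half = k // 2
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            acc = 0.0
            for i in range(k):
                for j in range(k):
                    rr, cc = r + i - half, c + j - half
                    if 0 <= rr < rows and 0 <= cc < cols:
                        acc += kernel[i][j] * image[rr][cc]
            out[r][c] = acc
    return out
```

A sharpening (high-pass) kernel with a dominant center tap and negative neighbors, applied with this routine, is the discrete analogue of the inverse-blurring operation described above.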
The kernel of a FIR filter, for example a 3×3 or a 5×5 matrix, may be determined by several methods known in the current art. For example, if the page image pixel spread function is known, a zero-forcing equalizer may be designed by calculating the linear inverse of the PSF. An example of the zero-forcing method is described in “Channel estimation and intrapage equalization for digital volume holographic data storage,” by V. Vadde and B. Kumar in Optical Data Storage 1997, pp. 250-255, 1997. Another approach is to choose FIR filter coefficients that minimize the difference between the equalized data page image and the original data page. Such a method is described in “Application of linear minimum mean-squared-error equalization for volume holographic data storage,” by M. Keskinoz and B. Kumar in Applied Optics, vol. 38, no. 20, Jul. 10, 1999.
Performance of FIR equalization as shown in the prior art is limited in at least two aspects. First, blur in a coherent imaging system is not a linear process. Although coherent light adds linearly in electric field strength, detectors can only directly detect irradiance. This introduces a nonlinear absolute-value-squared transformation. Furthermore, each detector element (pixel) integrates the irradiance over an area, introducing a further nonlinearity. The prior art has disclosed ways to address this problem either through a “magnitude model” (operating on the square root of the detected values, but lacking phase information), or through an “intensity model” (operating on the PSF and the pixel fill factors). An example of both the “magnitude model” and the “intensity model” is described in “Channel modeling and estimation for intrapage equalization in pixel-matched volume holographic data storage,” by V. Vadde and B. Kumar in Applied Optics, vol. 38, no. 20, Jul. 10, 1999.
Second, the performance of FIR equalization described by the prior art is limited because real imaging systems are not perfectly shift-invariant linear systems. In other words, the pixel spread function is not constant at all locations in the field of view. A number of factors create variations in the width or shape of the PSF throughout the field of view. For example, variations may be caused by lens aberrations and misalignment; by distortions, shrinkage, and other non-ideal media responses; and by misalignment and wavefront errors in the reconstructing reference beam. A significant consequence of these effects in a pixel-matched system is the degradation of the pixel matching, because image distortion shifts local areas of the image with respect to the detector pixels. For example, a uniform shrinkage of the medium causes the holographic image to be magnified, producing a radial displacement such that data pixel images are no longer centered on their respective detector pixels.
Therefore, new methods and systems for addressing the issues of the prior art methods are needed. In particular, methods and systems for equalizing holographic image data are needed to improve the storage density of the holographic data storage system. Further, methods and systems for compensating nonlinearity of the holographic data storage channel are also needed to improve the storage density of the holographic data storage system.
SUMMARY
A method for equalizing a holographic image page includes receiving the holographic image page and dividing the holographic image page into a plurality of local image regions. The method further includes generating a local alignment error vector for each local image region, computing a local finite impulse response kernel for each local image region according to the corresponding local alignment error vector, and adjusting misaligned pixels of each local image region using the corresponding local finite impulse response kernel.
In another embodiment, a method for compensating nonlinearity of a holographic data storage channel includes selecting a metric for measuring data accuracy of a holographic image page and computing a set of values of the metric over a predetermined set of linearization exponents. The method further includes selecting a desired linearization exponent for generating a desired value of the metric that corresponds to a desired data accuracy of the holographic image page, and adjusting the nonlinearity of the holographic data storage channel in accordance with the desired linearization exponent.
In yet another embodiment, a method for equalizing a holographic image page includes receiving the holographic image page, dividing the holographic image page into a plurality of image regions, and deriving an expected blur and an actual blur for each image region. The method further includes computing a pixel-signal-error ratio between the actual blur and the expected blur for each image region, computing a local finite impulse response kernel in accordance with the pixel-signal-error ratio and a predetermined global finite impulse response kernel, and adjusting misaligned pixels of each local image region using the corresponding local finite impulse response kernel.
BRIEF DESCRIPTION OF THE DRAWINGS
The aforementioned features and advantages of the invention as well as additional features and advantages thereof will be more clearly understood hereinafter as a result of a detailed description of embodiments of the invention when taken in conjunction with the following drawings.
FIG. 1 illustrates a holographic data storage system according to an embodiment of the present invention.
FIG. 2A illustrates an exemplary 21 pixel×21 pixel image data generated by the spatial light modulator.
FIG. 2B illustrates the 21 pixel×21 pixel image data of FIG. 2A detected at the output of the holographic data storage system without being processed by the inventive techniques.
FIG. 2C illustrates the 21 pixel×21 pixel image data of FIG. 2A after being processed according to an embodiment of the present invention.
FIG. 3A is a histogram of the unprocessed pixel image data according to an embodiment of the present invention.
FIG. 3B is a histogram of the processed pixel image data after being processed according to an embodiment of the present invention.
FIG. 4A illustrates a 3 pixel×3 pixel portion of an exemplary data image.
FIG. 4B illustrates an intensity profile resulting from adding and squaring the electric field strengths of pixels in the first row of FIG. 4A according to an embodiment of the present invention.
FIG. 5 illustrates a method for selecting a linearization exponent according to an embodiment of the present invention.
FIG. 6 compares the signal-to-noise ratio versus imaging aperture for the various equalization schemes discussed.
Like numbers are used throughout the figures.
DESCRIPTION OF EMBODIMENTS
Methods and systems are provided for equalizing holographic image pages and for compensating nonlinearity of a holographic data storage system. The following description is presented to enable any person skilled in the art to make and use the invention. Descriptions of specific techniques and applications are provided only as examples. Various modifications to the examples described herein will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other examples and applications without departing from the spirit and scope of the invention. Thus, the present invention is not intended to be limited to the examples described and shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
In a holographic data storage system, the detector (camera) is typically aligned with the holographic image during a read operation such that each pixel in the image is centered on a single pixel on the detector. This alignment process is generally referred to as “pixel matching.” The objective of pixel matching is to sample the holographic images of data-containing holograms on the detector in a manner that preserves the information content, so that the detected image has a low bit error rate (BER). Pixel misalignment may occur because one or more components of the holographic data storage system may be translated or rotated, causing translational, tilt, rotational, magnification, or defocusing errors in the detected image. Misalignment, unless otherwise indicated, may refer to one or more of translational, tilt, rotational, magnification, or defocusing errors of the detected image.
Equalizing a Holographic Image Page
FIG. 1 illustrates a holographic data storage system according to an embodiment of the present invention. The holographic data storage system includes a light source 110, a first lens 112, a beam splitter 114, a SLM 116, and a first microcontroller 117. The holographic data storage system further includes a first mirror 118, a second lens 120, a storage medium 124, a third lens 126, a detector 128, a second microcontroller 129, a second mirror 130, a microprocessor 136, and a memory 138. The memory 138 comprises an operating system 140, an application layer 141, an equalization module 142, and a linearization module 143.
In one embodiment, the light source 110 is a laser for providing a coherent beam of light. The beam splitter 114 is positioned to split the laser beam into an object beam and a reference beam. The object beam is directed to the SLM 116, where it is encoded, for example, by an encoding unit within the first microcontroller 117. The object beam is encoded with data associated with a data page, creating a two-dimensional image signal. The signal beam, modulated with the data page image, is then directed to the recording storage medium 124 via the first mirror 118.
The first microcontroller 117 may include software and/or hardware capable of encoding data sequences into pixel values by appropriately addressing the array of addressable elements of the SLM 116. The first microcontroller 117 may also encode various registration marks or known pixel patterns for determining misalignments, i.e., rotation, translation, and the like of the SLM 116, storage medium 124, or detector 128. For example, the first microcontroller 117 may include an encoder and/or a decoder, or the like, and may address the SLM 116 and detector 128 through firmware commands or the like.
The microprocessor 136 communicates (as indicated by the double arrow) with the first microcontroller 117 as well as the memory 138 and other components of the system.
The memory 138 may include highspeed random access memory and may include nonvolatile memory, such as a flash RAM. The memory 138 may also include mass storage that is remotely located from the microprocessor 136. The memory 138 preferably stores:

 an operating system 140 that includes procedures for handling various basic system services and for performing hardware dependent tasks; and
 an application layer 141 for interfacing between the operating system and other applications of the holographic data storage system.
The microprocessor 136 further communicates with an equalization module 142 and a linearization module 143 of the holographic data storage system, where

 the equalization module 142 reduces the variations of signal intensity for both the ON and OFF pixels; and
 the linearization module 143 compensates the channel nonlinearity of the holographic data storage system.
The equalization module 142 and the linearization module 143 may include executable procedures, submodules, tables and other data structures. In other embodiments, additional or different modules and data structures may be used, and some of the modules and/or data structures listed above may not be used. The equalization module 142 and the linearization module 143 may be implemented in software and/or in hardware. When implemented in hardware, the equalization module 142 and the linearization module 143 may be implemented in application specific integrated circuits (ASICs) or in field programmable gate arrays (FPGAs).
The holographic data storage system of FIG. 1 may also include microactuators (not shown) configured to move at least one of the SLM 116, detector 128, and recording medium 124. According to one example, microactuators may be controlled, for example, by the first microcontroller 117 or the second microcontroller 129 through microprocessor 136. Microprocessor 136 may receive signals from detector 128 and use a servo feedback loop or the like to move at least one of the SLM 116, detector 128, or recording medium 124 to increase the performance of the holographic storage device. For example, an error signal associated with a misalignment may be sent to the first microcontroller 117 or the second microcontroller 129 (or a microcontroller controlling the position of storage medium 124) to activate one or more microactuators.
Generally, alignment of holographic components is set at the time of manufacturing. Over time, however, the components may become misaligned due to vibrations, shocks, temperature changes, medium shrinkage, and the like. The spatial extent over which stored holograms have useable signaltonoise ratio (SNR) may be approximately only a few microns or less. Therefore, even slight movement of the hologram based on movements of the SLM, detector, or storage medium due to mechanical error, vibration, temperature change, medium shrinkage, and the like often degrades the performance of the holographic system.
In United States patent application Ser. No. 10/305,769, entitled “Micro-Positioning Movement of Holographic Data Storage System Components”, filed on Nov. 27, 2002 and commonly owned by InPhase Technologies, Inc., a method is disclosed for measuring page misalignment at local regions of the image by performing a cross-correlation between a part of the image and a known portion of the data page. The entire content of the 10/305,769 application is incorporated herein by reference. The 10/305,769 application further discloses how the method for measuring page misalignment may be applied at a plurality of sample positions within an image in order to generate a map of pixel misalignment over the whole image. The misalignment at each sample position is a vector with two alignment error components, e=(Δx, Δy), representing the x and y misalignment, respectively, measured in pixels. Given this measured misalignment information about an image, the presently disclosed method utilizes the alignment error vector to improve the performance of FIR equalization.
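The cross-correlation measurement of the '769 application may be illustrated, in greatly simplified form, by the following Python sketch, which estimates an integer-pixel shift by brute-force search; the actual method also resolves sub-pixel misalignment, and all names here are illustrative assumptions rather than details of the '769 application.

```python
def estimate_shift(known, detected, max_shift=2):
    """Estimate the (dx, dy) misalignment of `detected` relative to `known`
    by brute-force cross-correlation over integer shifts.

    Simplified integer-pixel sketch: the shift whose correlation score is
    highest is taken as the local alignment error; the '769 application's
    method additionally interpolates to sub-pixel resolution.
    """
    rows, cols = len(known), len(known[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for r in range(rows):
                for c in range(cols):
                    rr, cc = r + dy, c + dx
                    if 0 <= rr < rows and 0 <= cc < cols:
                        score += known[r][c] * detected[rr][cc]
            if score > best_score:
                best_score, best = score, (dx, dy)
    return best
```

Applying this at several sample positions over a page would yield the map of local alignment error vectors e_i described above.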
In one embodiment, an image page is divided into n local image regions, each of which is characterized by a local alignment error vector e_{i} (i=1 . . . n). Then, each local image region is equalized with a local FIR kernel, w_{i}, which is a modified version of the global FIR kernel, w. In other words, the method determines the magnitude and direction of the local pixel alignment error for each local image region, and then compensates the local FIR kernel w_{i} accordingly in order to remove the local pixel alignment error. In particular, w_{i} is formed by shifting w in the opposite direction of e_{i} so as to reverse the effects of the local pixel alignment error. For example, in the case where w is a 3-by-3 matrix, w is of the form
$w=\left[\begin{array}{ccc}{w}_{11}& {w}_{12}& {w}_{13}\\ {w}_{21}& {w}_{22}& {w}_{23}\\ {w}_{31}& {w}_{32}& {w}_{33}\end{array}\right].$
One approach to obtaining the global w matrix is to apply the linear minimum mean-squared-error (LMMSE) method over an entire known image page. Assuming 0 ≤ Δx_{i} ≤ 1 and 0 ≤ Δy_{i} ≤ 1 (i.e., the local pixel alignment error is positive and less than one pixel), the local w_{i} is computed according to the following equation:
$w_{i}=\left(1-\Delta x_{i}\right)\left(1-\Delta y_{i}\right)\left[\begin{array}{ccc}{w}_{11}& {w}_{12}& {w}_{13}\\ {w}_{21}& {w}_{22}& {w}_{23}\\ {w}_{31}& {w}_{32}& {w}_{33}\end{array}\right]+\Delta x_{i}\left(1-\Delta y_{i}\right)\left[\begin{array}{ccc}0& {w}_{11}& {w}_{12}\\ 0& {w}_{21}& {w}_{22}\\ 0& {w}_{31}& {w}_{32}\end{array}\right]+\left(1-\Delta x_{i}\right)\Delta y_{i}\left[\begin{array}{ccc}0& 0& 0\\ {w}_{11}& {w}_{12}& {w}_{13}\\ {w}_{21}& {w}_{22}& {w}_{23}\end{array}\right]+\Delta x_{i}\,\Delta y_{i}\left[\begin{array}{ccc}0& 0& 0\\ 0& {w}_{11}& {w}_{12}\\ 0& {w}_{21}& {w}_{22}\end{array}\right].\qquad(\mathrm{Eqn.}\ 1)$
In this example, w_{i }is shifted rightwards and downwards compared to w, implying that the coordinate system has been defined so that positive Δx and Δy correspond to leftwards and upwards, respectively. In other embodiments, w_{i }may be shifted in the other three quadrants (i.e., Δx and/or Δy is negative), or the shifts (Δx and Δy) may be greater than one, and Equation 1 may be rewritten to compensate shifts in other quadrants or changes in ranges of Δx and Δy.
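Equation 1 is a bilinear interpolation among four integer-shifted copies of the global kernel, weighted by the fractional alignment error. A Python sketch of this computation (the function names are illustrative) is:

```python
def shift_kernel(w, dx, dy):
    """Shift a 3x3 FIR kernel by a fractional alignment error (dx, dy),
    with 0 <= dx, dy <= 1, per Eqn. 1: bilinear interpolation of four
    integer-shifted, zero-filled copies of the global kernel w."""
    def shifted(sx, sy):
        # copy of w moved sx columns rightwards and sy rows downwards
        out = [[0.0] * 3 for _ in range(3)]
        for r in range(3 - sy):
            for c in range(3 - sx):
                out[r + sy][c + sx] = w[r][c]
        return out
    terms = [
        ((1 - dx) * (1 - dy), shifted(0, 0)),
        (dx * (1 - dy),       shifted(1, 0)),
        ((1 - dx) * dy,       shifted(0, 1)),
        (dx * dy,             shifted(1, 1)),
    ]
    return [[sum(coef * m[r][c] for coef, m in terms) for c in range(3)]
            for r in range(3)]
```

With (dx, dy) = (0, 0) the global kernel is returned unchanged, while fractional errors blend adjacent shifted copies, matching the four-term structure of Equation 1.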
One of the benefits of the technique described above is that instead of determining each w_{i }matrix for each local region from scratch, which typically requires having advanced knowledge of the input image and performing many computations, each w_{i }matrix is determined according to the global w matrix and the local pixel alignment error vector. The local w_{i }matrix is constrained in such way that only the degrees of freedom corresponding to pixel misalignment are allowed to be adjusted. Therefore, the amount of computation required in determining the local w_{i }matrix is significantly reduced. In addition, the technique is less noise prone because it has a more accurate estimate of the w_{i }matrix by using the local pixel alignment error that has two degrees of freedom as opposed to noise contributions from nine degrees of freedom if one were to generate the w_{i }matrix from scratch.
In an alternative embodiment, the magnitude of the shift applied to the local FIR kernel w_{i }may not be exactly equal to the local pixel alignment error (Δx_{i}, Δy_{i}), but rather equal to a scaled version of the local pixel alignment error, (a_{x}Δx_{i}, a_{y}Δy_{i}). The value of the scaling factors a_{x }and a_{y }would be determined by optimizing one or more performance metrics. An exemplary range of the scaling factors is 0<a_{x}<1, and 0<a_{y}<1, i.e., the optimal amount of shift is less than the full alignment error.
In other embodiments, different performance metrics may be employed to determine the scaling factors, including but not limited to signal-to-noise ratio, bit error rate, or least mean square error. In addition, either experimental data or theoretical (simulated) data may be used to determine the scaling factors. Further, the measured alignment errors include a certain amount of noise, and it may therefore be preferable to avoid amplifying this noise by using scaling factor values less than one. Furthermore, the raw pixel alignment error e_{i} may be spatially filtered, for instance with a low-pass filter, to reject noise introduced by the measurement process, and multiple e_{i} may be averaged for the same purpose.
In a further embodiment, the shift operation of Equation 1 may be applied to a predetermined global PSF, h, in the same direction as e_{i} rather than directly to the FIR kernel w. In this case, a local PSF h_{i} is determined for each image region, and the corresponding w_{i} is determined by an inversion operation such as the zero-forcing method. A person skilled in the art will recognize that other formulae for shifting the FIR kernel w may differ from Equation 1 in manners that do not depart from the spirit and scope of the invention.
Generally, as discussed in the background section, the prior art references contemplate generating a local FIR kernel w_{i} from scratch, for example, by using the methods of zero-forcing or LMMSE. These methods have the disadvantage that local FIR kernels w_{i} derived from sufficiently small local samples of an image tend to be noisy. The method described above addresses this shortcoming by incorporating prior knowledge of the global FIR kernel w, and restricting the degrees of freedom that the local data may affect. The allowed degrees of freedom, such as the x and y pixel alignment error, are those expected to have the greatest variation over a real image.
This method may be further generalized by allowing other degrees of freedom in equalizing a holographic image page. In particular, blur is a parameter that may vary locally within an image, or globally over an image. Blur may be incorporated as a locally adjustable parameter according to the present method by deriving a local measurement of the blur, and then altering the local FIR kernel w_{i }in a manner that compensates for variations in the blur compared to the expected blur in the global FIR kernel, w.
In one embodiment, the blur of the local image region is derived using a variation of the pixel misalignment measurement method of the '769 application as described above. During the calculation of the local pixel alignment error vector, a covariance matrix is generated which measures the cross-correlation between zero-mean versions of a small region of the measured image pattern and the corresponding known image pattern. The covariance matrix is used as an estimate of the local PSF ĥ_{i}. A first pixel signal error factor (PSEF) (a metric that varies inversely with blur) is derived from the local PSF by calculating the ratio of signal landing in the intended pixel location to signal landing in the neighboring pixel locations (i.e., the ratio of light striking the intended pixel to light striking its neighbors). In another embodiment, the neighboring pixel locations may include locations that are one or more pixels away from the intended pixel of interest. Similarly, a second PSEF is derived from a predetermined global PSF. Then, the first PSEF is divided by the second PSEF to form a pixel-signal-error ratio (PSER). The PSER is then used to adjust the local w_{i} so as to compensate for the deviation from nominal blur.
In a specific implementation, only the covariance associated with the central element of the PSF, h_{22}, and its four nearest neighbors (immediately above, below, to the left, and to the right of h_{22}) are computed. The PSER becomes
$\mathrm{err}_{\mathrm{PSEF}}=\dfrac{\dfrac{|\hat{h}_{22}|}{|\hat{h}_{12}|+|\hat{h}_{21}+\hat{h}_{23}|+|\hat{h}_{32}|}}{\dfrac{|h_{22}|}{|h_{12}|+|h_{21}+h_{23}|+|h_{32}|}}.$
The local FIR kernel w_{i }may then be modified by changing only its center element w_{22}:
w′_{22}=err_{PSEF}w_{22},
which adjusts the PSEF of w_{i} by the same proportion as the detected PSEF changes, thus compensating for the local blur of the local image region.
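The PSER computation and the center-tap adjustment described above may be sketched as follows. The function names are illustrative, and the PSEF follows the specific implementation given in the text (the magnitude of the central PSF element over its four nearest neighbors, with the left and right neighbors summed before taking the magnitude).

```python
def pser(h_local, h_global):
    """Pixel-signal-error ratio: the PSEF of the estimated local PSF
    divided by the PSEF of the predetermined global PSF."""
    def psef(h):
        center = abs(h[1][1])
        # four nearest neighbors of the central element h_22
        neighbors = abs(h[0][1]) + abs(h[1][0] + h[1][2]) + abs(h[2][1])
        return center / neighbors
    return psef(h_local) / psef(h_global)

def compensate_center(w, err):
    """Scale only the center tap of the local FIR kernel by the PSER,
    per w'_22 = err_PSEF * w_22."""
    w2 = [row[:] for row in w]
    w2[1][1] = err * w2[1][1]
    return w2
```

A locally blurrier region (lower local PSEF) yields a PSER below one, reducing the center tap relative to the high-pass neighbor taps and thereby strengthening the equalization there.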
In an alternative embodiment, the ratio of PSEF is used to equalize the local PSF h_{i} directly (i.e., h′_{22}=err_{PSEF}h_{22}), which is then used to determine w_{i} by zero-forcing, linear minimum mean-squared-error, or other methods. In yet another alternative embodiment, the applied ratio of PSEF may be rescaled to a value in the range 0<err_{PSEF}<1, preferentially closer to 1, in order to minimize the effects of measurement noise. A person of ordinary skill in the art will recognize that different metrics for measuring blur and/or different methods for adjusting the local FIR kernel w_{i} or the local PSF h_{i} may be applied without departing from the spirit and scope of the invention.
FIGS. 2A, 2B and 2C illustrate benefits of the locally variable FIR equalization technique according to an embodiment of the present invention. FIG. 2A illustrates an exemplary 21 pixel×21 pixel image data generated by the spatial light modulator. It is a raw image at the input of the holographic data storage system. An ON pixel is represented by a white 1×1 square and an OFF pixel is represented by a black 1×1 square.
FIG. 2B illustrates the 21 pixel×21 pixel image data of FIG. 2A detected at the output of the holographic data storage system without being processed by the inventive techniques. In other words, it is the corresponding image area read at the detector 128 of the holographic data storage system without being processed by the inventive techniques. As shown in FIG. 2B, there are many variations in brightness for the ON pixels, ranging from very bright to grey. For example, there are at least four levels of brightness of the ON pixels, at the coordinates (3, 7), (9, 8), (16, 6) and (16, 3), respectively. Such variations make dim ON pixels difficult to distinguish from OFF pixels. This limitation results in a higher bit error rate in the holographic image page, which in turn reduces the storage density of the holographic data storage system.
FIG. 2C illustrates the 21 pixel×21 pixel image data of FIG. 2A after being processed according to an embodiment of the present invention. In other words, the image data read at the detector 128 of the holographic data storage system is processed according to the locally variable FIR equalization technique. Compared to FIG. 2B, the ON pixels of the image data in FIG. 2C have fewer variations in brightness. The process of imaging through a restricted aperture causes the pixels to be blurred as if by the application of a low-pass spatial filter upon the image field amplitude. The effect is most noticeable in the ON pixels because the varying noise contributions from the neighboring pixels add coherently with the ON signal itself, effectively multiplying the noise in intensity. The technique of channel linearization reverses this process of coherent noise multiplication so that the noise is approximately additive and of equal magnitude for both the ON and OFF pixels. Then, the technique of applying the FIR filter inverts the low-pass blurring process (i.e., the FIR filter is a high-pass equalizer), making the ON and OFF pixel levels less dependent on their neighbors. Furthermore, the technique of varying the FIR filter in accordance with local pixel alignment further improves the performance of the equalizer by restoring the pixel alignment (pixel misalignment being manifested as asymmetrical blur). As a result of these operations, the histograms of the ON and OFF pixels are made narrower and more distinguishable.
FIGS. 3A and 3B further illustrate the benefits of the locally variable FIR equalization technique according to an embodiment of the present invention. FIG. 3A is a histogram of the unprocessed pixel image data according to an embodiment of the present invention. The vertical axis represents the number of pixels. The horizontal axis represents the pixel brightness at the detector 128. Without equalization, there are many variations in brightness of the ON pixels. The majority of the ON pixels have brightness values ranging from 250 to 500, but some ON pixels have brightness values ranging from 600 to over 1000. In this example, the SNR of the raw image data without equalization is about 3.409 dB. One of the problems of large variations in brightness of the ON pixels is that the ON pixels overlap with OFF pixels at the low end. Thus, no single threshold may be used to distinguish the ON pixels from the OFF pixels. This limitation results in a higher bit error rate in the holographic image page, which in turn reduces the storage density of the holographic data storage system.
FIG. 3B is a histogram of the processed pixel image data after being processed according to an embodiment of the present invention. The vertical axis represents the pixel count and the horizontal axis represents pixel brightness after equalization. As shown in FIG. 3B, the majority of the OFF pixels are centered on a brightness value of 1 and the majority of the ON pixels are centered on a brightness value of 2. Thus, there is a clearer separation of the ON pixels and the OFF pixels. In this example, the SNR of the processed image area has improved to 5.999 dB from the 3.409 dB of FIG. 3A.
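The text does not state which SNR definition produced the quoted 3.409 dB and 5.999 dB figures. One commonly used histogram-separation definition, shown here purely as an illustrative assumption, is (μ_ON − μ_OFF)/(σ_ON + σ_OFF) expressed in decibels:

```python
import math

def page_snr_db(on_values, off_values):
    """Histogram-separation SNR in dB for detected ON/OFF pixel brightness.

    Illustrative assumption: SNR = (mu_on - mu_off) / (sigma_on + sigma_off),
    expressed as 20*log10(...). The document does not specify which exact
    definition produced its quoted figures.
    """
    def stats(vals):
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals)
        return mu, math.sqrt(var)
    mu_on, sd_on = stats(on_values)
    mu_off, sd_off = stats(off_values)
    return 20.0 * math.log10((mu_on - mu_off) / (sd_on + sd_off))
```

Narrowing the two histograms (smaller σ) or separating their means, as the equalization above does, raises this figure directly.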
To improve the efficiency of the methods of linearization and equalization described above, special data pages may be used to simplify the procedure in determining the coefficients of the global PSF or the global FIR kernel. For example, a page consisting of many isolated pixels spread in known positions over the page may be used. In this context, an isolated pixel is a single ON pixel surrounded by nearest neighbors that are OFF pixels. Analysis of the recovered known pixel pattern for the spatial dependence of the PSF and linearization exponent is simplified because of reduced influence of adjacent pixels.
In addition, known pixel patterns may be employed as part of a calibration procedure for alignment when the holographic drive is first fabricated, and as part of a recalibration procedure as the holographic drive drifts and ages, or after the holographic drive is subject to mechanical shock and vibration. Using prerecorded known pixel holograms, the holographic drive may recalibrate itself and update its filter parameters as needed.
Compensating Nonlinearity of the Holographic Data Storage Channel
As suggested in the background section above, a significant factor limiting the performance of FIR filtering on a coherent optical data channel is the inherent nonlinearity of the channel. This is because the FIR filtering is tuned to remove the effects of a pixel's misalignment on its neighbors as well as possible within a linear regime. Thus, the nonlinearity of the holographic data storage channel needs to be compensated.
FIGS. 4A and 4B illustrate the nonlinearity of the holographic data storage channel. FIG. 4A illustrates a 3 pixel × 3 pixel portion of an exemplary data image. Both the horizontal and vertical axes represent the dimensions of a pixel. An ON pixel is represented by a white spot and an OFF pixel is represented by a dark spot. The first row consists of ON/OFF/ON pixels; the second row consists of ON/ON/OFF pixels; and the third row consists of OFF/ON/OFF pixels. The brightness of a pixel represents the aggregated level of electric field strength squared at that pixel location. Additionally, for some OFF pixels, the very dark colors represent electric field strength values inverted with respect to the electric field strength values of the ON pixels. As shown in FIG. 4A, the light of an ON pixel extends into its adjacent pixels, thereby creating intersymbol interference. Such intersymbol interference causes the holographic data storage channel to be nonlinear.
FIG. 4B illustrates the intensity profile resulting from adding and then squaring the electric field strengths of the pixels in the first row of FIG. 4A according to an embodiment of the present invention. Note that, as an example, only the intersymbol interference between the pixels in the first row is shown in FIG. 4B. A person of ordinary skill in the art will recognize that the intersymbol interference between pixels of adjacent rows may be analyzed in a similar manner. The horizontal axis represents the distance between the pixels, and the vertical axis represents the electric field strength. The curve 410 represents the electric field strength profile over distance for the left ON pixel. The curve 412 represents the electric field strength profile over distance for the right ON pixel. The curve 414 represents the light intensity profile obtained by summing and then squaring curves 410 and 412. The box 416 represents the size of the left pixel detector; the box 418 represents the size of the center pixel detector; and the box 420 represents the size of the right pixel detector.
Note that the electric field strength profile from the left ON pixel 410 affects the center OFF pixel as well as the right ON pixel 412. Similarly, the electric field strength profile from the right ON pixel 412 affects the center OFF pixel and the left ON pixel 410. As the image aperture is reduced, the electric field strength of an ON pixel blurs out and lands on the neighboring pixels. In addition, the electric field strength may take a negative value. In other words, the intersymbol interference from a neighboring pixel may not only add to the electric field strength of a pixel, it may also subtract from it. This characteristic presents another problem for the presumption of a linear channel, which assumes that the signal levels of neighboring pixels always add together and never subtract from each other.
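The add-then-square behavior of FIG. 4B can be illustrated numerically. The sketch below, in Python, uses Gaussian field profiles whose widths and positions are illustrative assumptions, not measured system values:

```python
import numpy as np

x = np.linspace(-1.5, 1.5, 301)            # distance in pixel units
e_left = np.exp(-((x + 1.0) ** 2) / 0.2)   # field of the left ON pixel (curve 410)
e_right = np.exp(-((x - 1.0) ** 2) / 0.2)  # field of the right ON pixel (curve 412)

# Curve 414: fields add coherently first, then the detector sees the square.
intensity = (e_left + e_right) ** 2

# If one field is inverted (negative), the overlapping tails subtract
# instead of adding, reducing the light on the center OFF pixel (box 418).
intensity_opposed = (e_left - e_right) ** 2
center = np.abs(x) < 0.5                   # extent of the center pixel detector
```

Comparing `intensity[center]` with `intensity_opposed[center]` shows more light spilling into the center OFF pixel when the fields are in phase: exactly the sign-dependent ISI that a purely linear model cannot capture.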
In one embodiment, the raw signal is linearized by applying a linearization exponent (α) to each detected pixel value. The linearization exponent is applied prior to an equalizing operation, such as the FIR filtering, which presumes linearity.
In this approach, the nonlinearity of the channel is measured according to a desired value of a metric corresponding to a desired data accuracy of the holographic data storage system. This approach assumes that the recording data are binary, having states 0 (OFF) and 1 (ON). The approach also assumes a global channel PSF h of finite extent, and that it has vertical and horizontal symmetry. For example, the global PSF h is of the form
$h=\left[\begin{array}{ccc}{h}_{4}& {h}_{3}& {h}_{4}\\ {h}_{2}& {h}_{1}& {h}_{2}\\ {h}_{4}& {h}_{3}& {h}_{4}\end{array}\right].$
Since h is presumed zero at all distances greater than one from the center, the intersymbol interference (ISI) introduced by h into a given pixel is a function of the pixel and its eight immediate neighbors. If the channel is linear, Equation 2 may be used to represent the measured intensity level of a pixel in terms of the channel response and the 512 (i.e., 2^9) possible states of the 3 by 3 neighborhood surrounding the central pixel as
$\left[\begin{array}{c}I_{1}\\ I_{2}\\ \vdots \\ I_{512}\end{array}\right]=X\left[\begin{array}{c}h_{1}\\ h_{2}\\ h_{3}\\ h_{4}\end{array}\right].\qquad(\text{Eqn. }2)$
X is a 512 by 4 matrix, where each row contains the counts of the respective terms of h that sum into the center pixel, and the signal intensity level vector I is a 512-element column vector containing the ideal linear intensity level of the center pixel for each neighborhood state.
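Under the symmetric 3×3 form of h given above, X can be constructed by counting, for each of the 512 neighborhood states, how many ON pixels fall under each h term. A minimal sketch in Python; the h values at the end are illustrative, not measured:

```python
import numpy as np
from itertools import product

# Index of the h term acting at each position of the 3x3 neighborhood,
# matching h = [[h4, h3, h4], [h2, h1, h2], [h4, h3, h4]].
weights = np.array([[4, 3, 4],
                    [2, 1, 2],
                    [4, 3, 4]])

states = np.array(list(product([0, 1], repeat=9)))   # all 2^9 neighborhoods
X = np.stack([states[:, (weights == k).ravel()].sum(axis=1)
              for k in (1, 2, 3, 4)], axis=1)        # 512 x 4 count matrix

h = np.array([0.8, 0.05, 0.04, 0.01])  # illustrative PSF coefficients
I = X @ h                              # ideal linear levels per Eqn. 2
```

For the all-ON neighborhood, the row of X is [1, 2, 2, 4], reflecting one center term, two horizontal, two vertical, and four diagonal contributions.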
In the case of a nonlinear channel, an actual signal intensity level vector I′ of detector values taken over all 512 possible pixel neighborhood states may be obtained by either analytical or empirical methods. Then, a fitting error of the actual vector I′ versus the ideal linear vector I may be defined as the least squares error (LSE) of signal intensity level as:
$\mathrm{err}_{\mathrm{LS}}=\frac{\lVert I-I'\rVert}{\lVert I\rVert},$
where err_LS is used as a metric for assessing channel linearity.
Given such a metric, the channel performance may be evaluated over a range of linearization exponents (α), and the linearization exponent that minimizes err_LS is selected. In one specific implementation, the channel is simulated in MATLAB, incorporating the pixel fill factors, the linearization exponent (α), and the continuous imaging system point spread function. Using the least-squares fitting function LSQNONNEG, the best-fitting PSF h may be obtained, as well as the actual channel response I′. The corresponding linear vector I is created by solving Equation 2. Then, the metric err_LS is computed from I and I′.
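The α sweep can be sketched as follows, with SciPy's `nnls` standing in for MATLAB's LSQNONNEG. The synthetic channel below is a toy stand-in for the full optical simulation: raw detector values are generated by squaring an ideal linear field response, so the sweep should recover α near 0.5; all of these choices are illustrative assumptions:

```python
import numpy as np
from itertools import product
from scipy.optimize import nnls

def err_ls(alpha, X, raw):
    """Eqn.-2 fit quality after linearizing raw values with exponent alpha."""
    i_prime = raw ** alpha                 # linearized detector values (I')
    h, _ = nnls(X.astype(float), i_prime)  # best-fitting nonnegative PSF
    i_lin = X @ h                          # ideal linear response (I)
    return np.linalg.norm(i_lin - i_prime) / np.linalg.norm(i_lin)

# Build X as in Eqn. 2 and synthesize a toy nonlinear channel response.
weights = np.array([4, 3, 4, 2, 1, 2, 4, 3, 4])
states = np.array(list(product([0, 1], repeat=9)))
X = np.stack([states[:, weights == k].sum(axis=1) for k in (1, 2, 3, 4)], axis=1)
h_true = np.array([0.8, 0.05, 0.04, 0.01])
raw = (X @ h_true) ** 2                    # square-law detection of the field

alphas = np.arange(0.30, 1.01, 0.02)
errors = [err_ls(a, X, raw) for a in alphas]
best_alpha = alphas[int(np.argmin(errors))]
```

Because the toy detector is exactly square-law, err_LS collapses near α = 0.5; a real channel with finite fill factors lands elsewhere, such as the 0.58 discussed below.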
FIG. 5 illustrates the selection of a linearization exponent according to the method described above. The horizontal axis represents the linearization exponent value. The vertical axis represents the least squares error err_LS, a metric for measuring the data accuracy of the holographic data storage channel. In this example, the best linearization exponent value α, which generates the lowest err_LS, is about 0.58, with a corresponding local minimum of err_LS=0.043 (−27.3 dB). For this exemplary holographic data storage system, err_LS increases when the linearization exponent decreases below 0.58, and also increases when the linearization exponent increases above 0.58. Note that the linearization exponent (α) may also be tuned to minimize the bit error rate (BER) of a holographic image page using the method described above.
In yet another embodiment, the linearization exponent (α) is optimized with respect to a signal-to-noise ratio (SNR), as opposed to the least-squares fitting error of signal intensity level described above. The SNR of the measured pixel values I′ over the states of the neighboring pixels may be calculated as follows. First, divide I′ into a first set and a second set, where the first set has the center pixel equal to 1 and the second set has the center pixel equal to 0. Second, calculate the means (μ_1 and μ_0) for the 1's and 0's, respectively. Third, calculate the standard deviations (σ_1 and σ_0) for the 1's and 0's, respectively. An ISI-limited SNR may then be computed by Equation 3 as
$\mathrm{SNR}_{\mathrm{ISI}}=\frac{\mu_{1}-\mu_{0}}{\sigma_{1}+\sigma_{0}}.\qquad(\text{Eqn. }3)$
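A compact sketch of Equation 3 in Python, assuming `i_prime` holds measured center-pixel values and a boolean mask marks which neighborhood states have the center pixel ON; the sample values below are hypothetical:

```python
import numpy as np

def snr_isi(i_prime, center_on):
    """ISI-limited SNR per Eqn. 3: (mu1 - mu0) / (sigma1 + sigma0)."""
    ones, zeros = i_prime[center_on], i_prime[~center_on]
    return (ones.mean() - zeros.mean()) / (ones.std() + zeros.std())

# Hypothetical measurements: three states with the center pixel ON,
# three with it OFF.
i_prime = np.array([2.0, 2.2, 1.8, 1.0, 1.2, 0.8])
center_on = np.array([True, True, True, False, False, False])
snr = snr_isi(i_prime, center_on)
```

The same function applies unchanged whether `i_prime` comes from an analytical channel model or from empirical detector data, as discussed next.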
This SNR considers only noise that is caused by intersymbol interference, and therefore it is ISI limited. However, in other embodiments, other sources of noise, for example coherently additive optical noise or detector response variation, may be incorporated into the noise denominator. In addition, alternate definitions of SNR, for example adding root-mean-square noise terms, may be used without departing from the spirit and scope of the invention. Further, the SNR may be calculated from formulae similar to Equation 3 over empirical data collected from actual holograms or from a representative optical imaging system. Furthermore, Equation 2 will have a different form if different assumptions about the size, shape, and symmetry of the global PSF are used. Finally, the metric for optimizing the linearization exponent (α) may be derived from the corrected bit error rate at the output or at other points in the holographic data storage channel. The bit error rate may be determined analytically or empirically.
FIG. 6 compares the signal-to-noise ratio versus imaging aperture for the various equalization schemes discussed. The figure illustrates the improvements of the inventive techniques of equalization and linearization, in terms of signal-to-noise ratio, over the prior art. The horizontal axis represents the area of the imaging aperture, which determines the PSF of the system. The vertical axis represents the signal-to-noise ratio of the data pages. Curve 602 shows the signal-to-noise ratio of the output image without any equalization or linearization. Curve 604 shows the signal-to-noise ratio achieved by applying a linearization exponent α=0.58. Curve 606 shows the effect of filtering a linearized page with a global w kernel determined from the measured PSF by the prior-art method of the zero-forcing equalizer. Curve 608 shows the improvement obtained by taking the same global w kernel of curve 606 and modifying it in each local image region according to that region's measured pixel alignment error. Finally, curve 610 shows the improvement obtained from a second iteration, applying the locally variable FIR equalization technique again to curve 608. As shown in FIG. 6, the inventive methods of linearization and equalization achieve a higher SNR than the prior art methods.
The leftmost set of data points shows the effect of each scheme at the Nyquist aperture, which is the smallest aperture (and therefore the highest storage density) that adequately samples the information of the data page according to the Nyquist criterion. This configuration also has the widest PSF, and therefore has the most to gain from the method of ISI removal. Thus, the method enables a user to optimize the storage density of the holographic data storage system by choosing a smaller imaging aperture that still meets a specific SNR design criterion.
In a different embodiment, a joint iteration method is applied to establish a set of parameters that optimizes the performance of the holographic data storage system. In this approach, the set of parameters is iteratively optimized by empirically evaluating its impact on a performance metric such as SNR, err_LS, or bit error rate. In one specific example of the joint iteration method, the parameters α, a_x, a_y, w_11, and w_12 are tuned iteratively to improve the performance of the holographic data storage system. Initially, the parameters are set to predetermined values, for example α=a_x=a_y=1 and w_11=w_12=−1/10. One or more holographic data images are linearized, equalized, and decoded according to the methods discussed above. (Note that the entire FIR kernel, w, is generated from w_11 and w_12 by assuming that w_22 is one, and that the 3×3 matrix has diagonal and rectangular symmetry.) Next, the decoded image is compared to the original image, and an actual, initial bit error rate is established. Then, an incremental change is made to the first parameter, α (say, incrementing it by 1/10), and the bit error rate is reevaluated. If the new bit error rate is lower (better) than the initial bit error rate, then the new value of α is kept for subsequent iterations; otherwise the old value of α is kept.
This process is repeated with each of the other parameters (a_x, a_y, w_11, and w_12) to complete the first iteration. Then, the whole process is repeated in subsequent iterations, with each parameter being adjusted (incremented or decremented) from its last, best-known value at each iteration. The direction of the adjustment for each parameter may be alternated, and the magnitude may be shrunk whenever an iteration fails to produce an improvement (e.g., if decrementing a_x by 1/10 fails to improve the bit error rate, then it will be incremented by 9/100 on the next iteration). The parameters (α, a_x, a_y, w_11, and w_12) may converge to values that jointly produce a local minimum of the bit error rate after a number of iterations. When each iteration produces only a negligible change in the overall bit error rate, the procedure is deemed to have converged, and the final values of the free parameters are recorded.
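The joint iteration above amounts to a coordinate descent over the free parameters. A minimal sketch in Python, where `evaluate` stands in for the full linearize/equalize/decode pipeline; the quadratic surrogate with a known optimum is purely hypothetical so the sketch can run end to end:

```python
def joint_iterate(evaluate, params, step=0.1, shrink=0.9, iters=60):
    """Coordinate descent: adjust one parameter at a time, keep a change
    only if it lowers the metric; on failure, reverse and shrink the step."""
    best = evaluate(params)
    steps = {k: step for k in params}
    for _ in range(iters):
        for k in params:
            trial = dict(params)
            trial[k] = params[k] + steps[k]
            score = evaluate(trial)
            if score < best:           # keep the improved value
                params, best = trial, score
            else:                      # alternate direction, shrink magnitude
                steps[k] = -shrink * steps[k]
    return params, best

# Hypothetical stand-in for the pipeline's bit error rate, with a known
# optimum; a real evaluate() would decode a page and count bit errors.
target = {"alpha": 0.58, "ax": 0.0, "ay": 0.0, "w11": -0.1, "w12": -0.1}
def surrogate(p):
    return sum((p[k] - target[k]) ** 2 for k in target)

start = {"alpha": 1.0, "ax": 1.0, "ay": 1.0, "w11": -0.1, "w12": -0.1}
best_params, best_metric = joint_iterate(surrogate, start)
```

Because the surrogate is additive across parameters, each one-at-a-time step either helps or is reversed and shrunk, mirroring the 1/10-then-9/100 schedule in the text; a real bit error rate surface couples the parameters, which is exactly why the joint iteration matters.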
Note that the joint iterative method described above is independent of how each individual parameter is obtained. For example, the zero-forcing method may be employed to determine the FIR coefficients w from a given pixel spread function h; or the LMMSE method may be employed to determine w directly from instantiations of the (unequalized) channel response.
One skilled in the relevant art will recognize that many possible modifications of the disclosed embodiments may be used, while still employing the same basic underlying mechanisms and methodologies. The foregoing description, for purposes of explanation, has been made with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described to explain the principles of the invention and its practical applications, and to enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.