Compact 3D Depth Capture Systems
Cross-Reference to Related Applications
This application claims priority to and claims the benefit of U.S. Application Serial No.
14/226,515 filed March 26, 2014, which is hereby incorporated by reference in its entirety.
Technical Field
[01] The disclosure is related to 3D depth capture systems, especially those suited for integration into mobile electronic devices.
Background
[02] Three-dimensional (3D) depth capture systems extend conventional photography to a third dimension. While 2D images obtained from a conventional camera indicate color and brightness at each (x, y) pixel, 3D point clouds obtained from a 3D depth sensor indicate distance (z) to an object surface at each (x, y) pixel. Thus, a 3D sensor provides measurements of the third spatial dimension, z.
[03] 3D systems obtain depth information directly rather than relying on perspective, relative size, occlusion, texture, parallax and other cues to sense depth. Direct (x, y, z) data is particularly useful for computer interpretation of image data. Measured 3D coordinates of an object may be sent to a 3D printer to create a copy of the object, for example. Measured 3D coordinates of a human face may improve the accuracy of computer facial recognition algorithms and reduce errors due to changes in lighting.
[04] Many techniques exist for 3D depth capture, but two of the most successful so far are time of flight and structured light approaches. Time of flight is based on measuring the round trip time for light to travel from a 3D depth capture system to an object and back. The farther away the object is, the longer the round trip time. Structured light is based on projecting a light pattern onto an object and observing the pattern from a vantage point separated from the
projector. For example, a pattern of parallel stripes projected onto a face appears distorted when viewed from a position away from the projector.
[05] Current 3D depth capture systems are not small enough to be integrated into mobile electronic devices such as cell phones and tablet computers. Some systems have been packaged into centimeter scale enclosures that can be strapped onto tablets. For 3D depth capture to become a viable addition to mobile devices' sensor suites, however, miniaturization to the millimeter scale is needed.
Brief Description of the Drawings
[06] Fig. 1A shows a mobile electronic device equipped with an integrated 3D depth capture system.
[07] Fig. 1B shows the device of Fig. 1A in a face recognition scenario.
[08] Fig. 2 shows a compact 3D depth capture system integrated into a pair of eyeglasses.
[09] Fig. 3 shows a compact 3D depth capture system integrated into a door frame.
[10] Fig. 4 is a high-level block diagram for a compact 3D depth capture system integrated into a mobile electronic device.
[11] Fig. 5 is a conceptual block diagram of compact 3D depth capture system components.
[12] Fig. 6 illustrates a 3D depth capture projector contained in a small volume package suitable for integration into mobile electronic devices.
[13] Fig. 7 illustrates optical principles for a 3D depth capture system projector.
[14] Fig. 8 is a conceptual block diagram of electronic and optical signals in a compact 3D depth capture system.
[15] Figs. 9A and 9B illustrate memory addressing strategies for generating digital ribbon data signals.
[16] Fig. 10 illustrates a MEMS ribbon wiring diagram.
[17] Fig. 11 is a graph illustrating power consumption during system burst mode operation.
[18] Fig. 12 is a diagram illustrating the relationship between spatial pattern phase and depth.
[19] Fig. 13 is a graph illustrating phase measurements at two pixels in a camera.
[20] Fig. 14 illustrates depth data update cycles.
[21] Fig. 15 shows an example of compact 3D depth capture system output data.
Detailed Description
[22] Compact 3D depth capture systems described below are designed to be integrated into smart phones and other mobile devices. Miniaturization needed for mobile applications is based on new structured light subsystems including optical pattern projection and detection techniques, and system integration concepts. Compact 3D depth capture systems are based on linear-array MEMS-ribbon optical pattern projectors. The systems estimate depth at each pixel independently, with relatively simple computations.
[23] Fig. 1A shows a mobile electronic device 105 equipped with an integrated 3D depth capture system. The system includes 3D system projector 110, 3D system camera 115, and 3D system driver/interface 120. The driver/interface provides 3D capabilities to the mobile device's application processor 125. While the projector is a dedicated component of the 3D system, the camera may be used for conventional photo and video as well.
[24] Fig. 1B shows the device of Fig. 1A in a face recognition scenario. When a user aims the device at his or her face 130 the depth capture system obtains a set of 3D (x, y, z) points representing the distance from the device to surface contours of the face. This data may be used as an aid in a facial recognition algorithm for biometric identification, for example.
[25] A smart phone, tablet or similar mobile device may be equipped with 3D system projectors and cameras on its front, back or even side surfaces. These sensors may be optimized for different purposes. A front-facing 3D system may be used to help a phone
recognize its owner at arm's-length distances while a rear-facing 3D system may provide data for indoor navigation or situational awareness applications, as examples.
[26] Compact 3D depth capture system projectors and cameras are small enough to be embedded in other kinds of personal accessories. Fig. 2 shows a compact 3D depth capture system integrated into a pair of eyeglasses 205. In the example of Fig. 2, 3D system projector 210 and 3D system camera 215 are placed in opposite corners of horn-rimmed glasses. These components may communicate with processors in the temples of the glasses to perform object measurements and recognition activities such as identifying the brand or size of a pair of shoes 220.
[27] 3D depth capture systems that are small enough and inexpensive enough to integrate into mobile electronic devices are also suitable in many other situations. Fig. 3, for example, shows a compact 3D depth capture system integrated into a door frame 305. Here 3D system projector 310 and 3D system camera 315 are located at the top corners of the door frame, but many other mounting options are suitable. A door frame equipped with 3D depth capture technology can identify people, animals (such as dog 320), or other objects nearby and provide access as appropriate. Similarly, 3D depth capture systems may be integrated into automobile dashboards to monitor vehicle occupants. An automatic system may apply brakes if it detects a driver nodding off to sleep, as an example.
[28] 3D depth capture systems, whether used in mobile device, personal accessory, fixed installation, automotive, or other applications, share a common system architecture illustrated in Fig. 4 which is a high-level block diagram for such systems. In Fig. 4, application processor 405 communicates with 3D system driver/interface 410. The driver/interface communicates with 3D system projector 415 and 3D system camera 420. Application processor 405 may be any processor or graphics processing unit. Examples include a main application processor in a smart phone or a processing unit in an automobile. Description and examples of the driver/interface, projector and camera are found below.
[29] Fig. 5 is a conceptual block diagram of compact 3D depth capture system operation and components. In Fig. 5, 3D system projector 510 and 3D system camera 515 work with
driver/interface 520 to obtain 3D data representing object 500. Projector 510 projects a two-dimensional pattern 505 having spatial variation in one dimension onto the object. Camera 515 observes the pattern from a vantage point separated from the projector by a baseline distance. (The baseline is perpendicular to the direction of spatial variation in the pattern.) The pattern 525 recorded by the camera appears distorted by the surface features of the object.
[30] Each point on the object appears to the camera to be illuminated with light having a sinusoidal intensity variation with time. The camera shares a common time reference with the projector and each operates in a continuous, cyclic mode. The camera frequency (camera cycles per second) is an integer multiple, greater than two (i.e. 3, 4, 5, ...), of the projector temporal frequency. At each pixel, the camera therefore samples the projector temporal intensity modulation 3, 4, 5, etc. times during each projector cycle. These measurements allow the camera to determine the temporal phase of the intensity modulation at each pixel. The phase measurements are then used to estimate depth using structured light triangulation techniques.
[31] Details of driver/interface 520 discussed in more detail below include: a pulse density modulator that drives MEMS ribbons in a linear-array spatial light modulator in the 3D system projector; memory addressing techniques that enable quick reconfiguration of the spatial period of projected patterns; synchronous detection of patterns with a rolling shutter camera; low-voltage drive schemes for MEMS ribbons; and pseudo-bipolar operation of MEMS ribbons.
[32] Projector driver/interface and projector optical components including a laser light source are illustrated in Fig. 6. In an embodiment, these components are packaged in a volume 605 that measures 3 mm x 6 mm x 10 mm, small enough to be integrated into a smart phone, tablet or other device. Components within this volume include a laser light source, a linear-array MEMS-ribbon optical phase modulator, a Fourier plane optical phase discriminator, and several lenses. Specifically, in Fig. 6, laser 610 is a diode laser that emits light in the infrared. Light from the laser is focused (via x-axis relay cylinder lens 615 and field lens 620) on a linear-array MEMS-ribbon optical phase modulator that is packaged with driver electronics 625. In an embodiment, the MEMS ribbon array measures approximately 0.5 mm x 0.2 mm x 0.1 mm and each ribbon measures approximately 200 μm x 4 μm x 0.1 μm; they are not visible in Fig. 6. Light reflected by the phase modulator is incident upon an apodizing filter 630 that acts as an
optical phase discriminator. The discriminator converts phase modulation imparted by the MEMS-ribbon linear-array into amplitude modulation. Projection lenses (y-axis projection cylinder 635 and x-axis projection cylinder 640) and fixed mirror 645 then project the light toward an object to be measured.
[33] Fig. 7 illustrates optical principles for a 3D depth capture system projector such as the system of Fig. 6. In Fig. 7, field lens 720 is placed near a 1-D (i.e. "linear") MEMS ribbon array 725. The focal length of the field lens is f. A Fourier plane phase discriminator 730 is placed a distance f away from the field lens on the opposite side of the lens from the MEMS ribbon array.
Projection lenses 735 project pattern 750. Pattern 750 is a two-dimensional pattern having spatial variations in one dimension.
[34] Fourier plane phase discriminators are discussed, for example, in US 7,940,448 issued 05/10/2011 to Bloom, et al., the disclosure of which is incorporated by reference and attached below as Appendix A. In particular, the phase discriminator used here is similar to the cosine (phase similarity) apodizing filter of Figs. 10B, 16A and 18 of the '448 patent. In an embodiment, a Schlieren slit approximates the central portion of a cosinusoidal transmission function. The period of the apodizing filter (or width of the Schlieren slit) is chosen to be commensurate with the spacing of ribbons in the MEMS linear array.
[35] The optical system described so far projects a pattern having spatial sinusoidal intensity variation in one dimension. The pattern also appears to have temporal sinusoidal intensity variation. Temporal modulation of the pattern is produced using digital techniques, however. Digital temporal modulation appears as smoothly varying sinusoidal modulation when viewed with a camera having an integration time substantially longer than the digital modulation period. These principles may be understood with reference to Fig. 8 which is a conceptual block diagram of electronic and optical signals in a compact 3D depth capture system.
[36] Fig. 8 illustrates how the appearance of sinusoidal, temporal light modulation is produced. Graph 805 represents the desired, sinusoidal variation of light intensity as a function of time. This function is stored or generated in 3D system driver/interface 810 as a pulse density modulation digital waveform 815. Ribbon drive pattern waveform 815 alternates
between two values, 1 and 0, as time passes. The waveform shown at 815 is a 64 times oversampled pulse density representation of a sine wave, chosen for ease of illustration. In an embodiment, a 2048 times oversampled pulse density representation of a sine wave is used.
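By way of illustration only, the following sketch (not part of the original disclosure) shows one way a pulse density modulation representation of a sine wave can be generated with a first-order sigma-delta accumulator, and how averaging over a long window recovers the smooth sinusoid; the function name, oversampling factor and window length are illustrative assumptions.

```python
import numpy as np

def pdm_sine(oversample=2048, amplitude=0.5, offset=0.5):
    """Return a 1/0 pulse-density representation of one sine period.

    A first-order sigma-delta accumulator is one common way to produce such
    a bit stream; the disclosure only requires that the stream be a pulse
    density modulation representation of a sine wave.
    """
    t = np.arange(oversample) / oversample                 # one temporal period
    target = offset + amplitude * np.sin(2 * np.pi * t)    # desired intensity, 0..1
    bits = np.zeros(oversample, dtype=np.uint8)
    acc = 0.0
    for i, x in enumerate(target):
        acc += x
        if acc >= 1.0:       # emit a "1" and carry the remainder
            bits[i] = 1
            acc -= 1.0
    return bits

bits = pdm_sine()
# Averaging over a window much longer than one bit (the camera integration
# time in the text) recovers the smooth sinusoid from the digital stream.
window = 64
recovered = bits.reshape(-1, window).mean(axis=1)
```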
[37] A driver/interface sends electrical drive signals to ribbons in a MEMS array. Whenever the drive pattern value is "1", ribbon pairs in the array are set to equal deflection as shown at 820. Whenever the drive pattern value is "0", ribbon pairs in the array are set to unequal deflection as shown at 825. In any particular pair of ribbons, one ribbon may be at rest all the time, while the other is set to one of two possible deflections: the same as the other ribbon's rest state, or an activated state deflection. Because the ribbon drive pattern is digital, ribbons are never set to intermediate deflections.
[38] Ribbon pairs having the same deflection, as in 820, lead to bright stripes in a 1-D pattern, while ribbon pairs having unequal deflection, as in 825, lead to dark stripes in a 1-D pattern. The brightness at a particular pixel of the pattern versus time is shown at 830. The actual light output intensity (digital light output 830) versus time is a digital function; in fact it is the same pulse density modulation representation as ribbon drive pattern 815. Digital ribbon operation eliminates the need to accommodate a pixel transfer function expressing light output versus ribbon drive signal input. The high reconfiguration speed of MEMS ribbons allows them to follow the digital modulation signal faithfully.
[39] When the digital light output at a pixel is viewed with a camera 835 having an integration time substantially longer than the digital period (i.e. the shortest time that the digital pattern remains at "1" or "0"), the time variation of the pattern appears sinusoidal, as shown at 840. This is an example of Σ-Δ modulation in which the camera integration time acts as a low-pass filter. Here "substantially" longer means more than five times longer. In an embodiment, 2048 reconfigurations of a projected light pattern take place in the same time as four camera exposure cycles; thus the projector rate is more than 500 times faster than the camera cycle rate. In general, the projector rate is more than 100 times faster than the camera cycle rate.
[40] When ribbons in a MEMS array are arranged in pairs, ribbons having the same deflection may produce maximum light output, while ribbons having different deflections may
produce minimum light output from a projection system. However, alternate phase discriminator designs are possible that reverse this behavior; i.e. ribbons having the same deflection produce minimum light output, while ribbons having different deflections produce maximum light output. With both of these approaches, the number of pixels (which may be spread into strips via anamorphic optics) is half of the number of ribbons.
[41] Ribbons may also be operated such that transitions between ribbons determine pixel output. In this case, the number of pixels is equal to the number of ribbons. An optical phase discriminator may be designed for this case as discussed in the '448 patent.
[42] Digital light output 830 and temporal intensity variation 840 describe actual and perceived light intensity versus time at one pixel in a projected 2-D pattern. The pattern also has sinusoidal variation in one spatial dimension.
[43] Sinusoidal spatial variation is produced by delaying digital signals such as 815 to each successive active ribbon in an array. For example, pattern 815 may be sent to the first active ribbon in the array. A short time, Δt, later the same pattern is sent to the second active ribbon. A short time, Δt, later the same pattern is sent to the third active ribbon, etc. Here, Δt is shorter than the time for one cycle of pattern 815.
[44] Small values for Δt lead to low-spatial-frequency projected patterns which provide coarse depth resolution and larger distance between depth ambiguities. Larger values for Δt, on the other hand, lead to high-spatial-frequency projected patterns which provide fine depth resolution, but smaller distance between depth ambiguities. Digital patterns such as 815 may be generated on-the-fly or retrieved from memory. In the latter case, time offsets Δt may be generated through memory addressing schemes. Figs. 9A and 9B illustrate memory addressing strategies for generating digital ribbon data signals.
[45] Fig. 9A shows an example 905 in which a digital pattern (analogous to pattern 815 in Fig. 8) is read from memory and supplied to 32 ribbon drive lines simultaneously. The pattern is 4096 bits long. Counter 910 selects 32-bit drive signals from a 32 bit by 4096 bit memory space. After the 4096th 32-bit signal is read, counter 910 returns to the first 32-bit signal. The 4096 bit pattern is the same for each active ribbon, but is offset for successive ribbons. The placement
of 1's in the example is intended to indicate that successive active ribbons are driven by copies of the 4096 bit pattern offset by one bit per clock cycle. The strategy of Fig. 9A uses one memory address counter to access 32-bit wide data. To change the spatial frequency of light patterns produced from a MEMS-ribbon-based projector driven by data using the strategy of Fig. 9A, the data in the 32-bit by 4096-bit memory space must be updated to change the offsets between active ribbons. Fig. 9A shows data for pixels (e.g. "Pix3") and for "Bias" and
"Substrate" signals. These additional signals are described below.
[46] For some depth capture applications, it is desirable to be able to change the spatial frequency of projected patterns rapidly. This allows quick switching between high-precision depth data acquisition (with short ambiguity distance) and low-precision depth data acquisition (with long ambiguity distance).
[47] Fig. 9B shows an example 915 in which a digital pattern (analogous to pattern 815 in Fig. 8) is read from memory using separate counters. Only three counters are shown for ease of illustration, one for each ribbon drive line. If 32 ribbon drive lines are to be addressed as in Fig. 9A, then 32 counters are needed. In this example, each counter addresses one 1-bit by 4096-bit memory space 925. At any particular time, the counters retrieve data from different bits in the 4096 bit sequence. Changing spatial frequency is now just a matter of setting the counters to read data a different number of bits apart.
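The counter-per-line addressing of Fig. 9B can be sketched as follows (an illustrative model, not the disclosed hardware; the class and parameter names are assumptions). Changing the spatial frequency only re-initializes the counters; the stored pattern itself is untouched.

```python
class RibbonPatternReader:
    """Read one shared digital pattern through per-line counters (Fig. 9B style)."""

    def __init__(self, pattern_bits, num_lines=32, bit_offset=128):
        self.pattern = list(pattern_bits)      # e.g. a 4096-bit PDM sine wave
        self.n = len(self.pattern)
        # Counter i starts i * bit_offset bits into the pattern; a larger
        # offset corresponds to a higher spatial frequency (larger delta-t).
        self.counters = [(i * bit_offset) % self.n for i in range(num_lines)]

    def set_spatial_frequency(self, bit_offset):
        # Reconfiguring spatial frequency only resets the counters.
        self.counters = [(i * bit_offset) % self.n
                         for i in range(len(self.counters))]

    def next_drive_word(self):
        # One bit per ribbon drive line for the current clock cycle.
        word = [self.pattern[c] for c in self.counters]
        self.counters = [(c + 1) % self.n for c in self.counters]
        return word

# Usage with a stand-in 4096-bit pattern:
reader = RibbonPatternReader(pattern_bits=[0, 1] * 2048)
drive_word = reader.next_drive_word()
```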
[48] A projected pattern usually has several periods of sinusoidal spatial intensity variation rather than just one. That being the case, it is not necessary to produce as many ribbon drive signals as there are ribbons. Suppose for example that a ribbon array has N active ribbons and that it is used to project a pattern having K spatial periods. Only N/K different ribbon drive signals are necessary to produce the pattern. This leads to a simplification in wiring that may be understood with reference to Fig. 10 which illustrates a MEMS ribbon wiring diagram.
[49] In Fig. 10, ribbons 1005 are part of a linear-array MEMS-ribbon light modulator. A signal line 1010 carries a digital drive signal to the zeroth, kth, 2kth, etc., active ribbon in the array. Similarly, another signal line carries another digital drive signal to the first, (k+1)th, (2k+1)th, etc., active ribbon. Another signal line carries yet another digital drive signal to the second,
(k+2)th, (2k+2)th, etc., active ribbon. While there may be N active ribbons in the array, only N/K signal lines are required to drive it. This leads to a considerable simplification in MEMS signal line layout. The lowest spatial frequency pattern that can be produced using this scheme has N/K periods rather than the one-period pattern that could be achieved if each active ribbon were addressed separately. In an embodiment, a MEMS ribbon array has 128 active ribbons with every 32nd ribbon addressed together (i.e. N=128, K=32 in the notation used above). Thus the minimum number of spatial periods in a projected pattern is four.
[50] In an embodiment, ribbon drive electronics use low-voltage ribbon drive techniques described in co-pending US application number 13/657,530 filed on 10/22/2012 by Bloom et al., the disclosure of which is incorporated by reference. As described in the '530 application, low-voltage ribbon drive schemes used here are based on a DC bias voltage and a low-voltage ribbon control signal added in series to take advantage of ribbon nonlinear displacement characteristics. Low-voltage ribbon drive techniques make the ribbon driver electronics compatible with CMOS digital electronics commonly found in mobile devices. The bias voltage is represented by the bias signal mentioned in connection with Fig. 9.
[51] In an embodiment, ribbon drive electronics also use pseudo-bipolar ribbon drive techniques described in US 8,368,984 issued to Yeh and Bloom on 02/05/2013, the disclosure of which is incorporated by reference. As described in the '984 patent, pseudo-bipolar ribbon drive schemes used here are designed to avoid difficulties that might otherwise arise when unipolar CMOS electronics are used to drive MEMS ribbon devices. In particular, pseudo-bipolar operation reduces or eliminates surface charge accumulation effects in MEMS ribbon devices. The substrate voltage is represented by the substrate signal mentioned in connection with Fig. 9.
[52] A compact 3D depth capture system may operate continuously or in bursts. Fig. 11 is a graph illustrating power consumption during system burst mode operation. The graph plots depth capture system power, P, versus time. Most of the system power is consumed by the laser light source, although operating a camera at faster cycle rates also increases power consumption. In the example of Fig. 11, in one mode of operation, the laser, projector system and camera operate continuously and consume one unit of power. In a second mode of
operation, "burst mode", the system operates at low duty cycle; it is on for only one unit of time out of eight. During that one unit "on" time, however, the system (primarily the laser) consumes eight units of power. The average power consumed is the same in both cases.
[53] In the example of Fig. 11, the camera collects the same number of photons emitted by the projector system in burst mode or continuous mode. However, in burst mode, the number of background photons collected by the camera is reduced by a factor of eight because its shutter is open only 1/8 of the time. The signal to noise ratio is therefore improved by a factor of approximately √8. Furthermore, burst mode reduces motion artifacts if the camera runs at a faster cycle rate. Of course, the 1:8 or 12.5% duty cycle of Fig. 11 is only an example. A compact 3D depth capture system may be operated with duty cycles ranging from 100% to less than 1%, for example.
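As a check on the √8 figure (assuming a background-shot-noise-limited measurement, an assumption not stated explicitly above): the collected signal photon count S is unchanged while the background count falls from B to B/8, so the signal to noise ratio scales as S/√(B/8) = √8·(S/√B), i.e. an improvement of approximately √8 ≈ 2.8.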
[54] A compact 3D depth capture system projects a two-dimensional pattern having sinusoidal spatial variation in one dimension. The pattern also has sinusoidal variation in time. Fig. 12 is a diagram illustrating the relationship between spatial pattern phase and depth. In Fig. 12, item "P" is a 3D depth capture system pattern projector while item "C" is a camera. The camera is separated from the projector by a baseline distance, d. Projector P emits a light pattern that is represented in the figure by a radiating fan of stripes 1205. The stripe pattern in the figure has an "on/off" or square wave shape for ease of illustration; an actual pattern has sinusoidal spatial variation. ΔΘ is the angular extent of one period of the pattern. It may be estimated, for example, by dividing the angular field of view of the camera by the number of pattern periods visible to the camera.
[56] Consider a point at distance z₀ along the z-axis as shown in the figure. The distance to this point from the projector is r. The spatial frequency of the pattern at the point is described by a (reciprocal space) k vector pointing in the k direction as shown in the figure. The magnitude of the k vector is given by |k| = 2π/(r·ΔΘ). The z component of the k vector, k_z, is given by:

k_z = |k|·(d/r) ≈ |k|·(d/z₀)

The approximation is valid for d << z₀.

[57] Suppose that the minimum change in spatial phase that the camera can detect is Δφ_min. This change in phase corresponds to a change in distance according to:

Δz ≈ Δφ_min/k_z ≈ (z₀·Δφ_min)/(|k|·d)
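By way of illustration only (the numbers are assumptions, not values from the disclosure): with a baseline d = 50 mm, a working distance z₀ = 500 mm, a pattern angular period ΔΘ ≈ 0.017 rad (about 1°), and a minimum detectable phase change Δφ_min = 2π/100, the relations above give |k| = 2π/(z₀·ΔΘ) ≈ 0.74 rad/mm, k_z ≈ |k|·(d/z₀) ≈ 0.074 rad/mm, and hence a depth resolution Δz ≈ Δφ_min/k_z ≈ 0.85 mm.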
[58] Given the relationship between pattern spatial phase and distance, one can see that a 3D depth capture system can estimate distance to an object (appearing at a pixel in an image captured by its camera) by estimating the spatial phase of the pattern at the object. The intensity, I, of the pattern has the form:

I/I₀ = 1/2 + (1/2)·cos(ωt − k_z·z) = 1/2 + (1/2)·cos(ωt − φ)
[59] Here I₀ is the maximum pattern intensity, ω is the angular frequency of the pattern temporal modulation, t is time, and φ is the spatial phase. Fig. 13 is a graph illustrating phase measurements at two pixels in a camera. The pixels are identified by square and triangle subscripts. Pattern intensity at the first pixel (square subscript) is shown by the solid curve while pattern intensity at the second pixel (triangle subscript) is shown by the dashed curve.
[60] Measurements of the pattern intensity curves for each pixel are obtained from a 3D depth capture system camera such as cameras 115, 215, 315, 420, 515, 835 or camera "C" in Fig. 12. In an embodiment, a rolling shutter CMOS camera is used to obtain pattern spatial phase measurements. Data from the rolling shutter camera is read out row-by-row
continuously. Data from the first row is read out immediately following data from the last row of the camera image sensor. A camera cycle time is the time required to read out all the rows. The camera generates a timing signal that is synchronized to its rolling shutter cycles. This timing signal may be used as a trigger for projector pattern cycles. The camera has an inherent integration time associated with collecting photons at each pixel. All columns in a given row of image sensor pixels collect light simultaneously during the integration time. The integration time effectively imposes a low-pass filter on the high-speed digital light modulation signal that is projected by the MEMS ribbon-array projector. Thus the pulse density modulated light signal appears to have sinusoidal time variation when measured by the camera.
[61] The phase φ of the pattern appearing on an object, and the corresponding depth of the object, is estimated on a per pixel basis; each measurement is independent of measurements made at other pixels. The phase is estimated by synchronous detection (i.e. sampling at regular intervals) of the pattern modulation signal. In-phase and quadrature measurements, referenced to the camera timing signal, provide the data necessary to estimate pattern signal phase.
[62] In the example of Fig. 13, measurements are performed at ωt = 0, π/2, π, 3π/2, 2π, etc., or four times per sine wave period. An in-phase measurement of the pattern intensity at the first pixel (square subscript) is obtained by subtracting the measurement made at ωt = π from the one made at ωt = 0. A quadrature measurement of the pattern is obtained by subtracting the measurement made at ωt = 3π/2 from the one made at ωt = π/2. The phase is then the arctangent of the ratio of in-phase and quadrature measurements. The same procedure yields the phase at a second pixel (triangle subscript). Similar calculations are performed for in-phase and quadrature measurements made at each pixel.
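A minimal per-pixel sketch of this four-sample synchronous detection (illustrative only; the function name, idealized point samples, and use of NumPy are assumptions, not part of the disclosure). With intensity of the form 1/2 + (1/2)·cos(ωt − φ), the in-phase difference is proportional to cos(φ) and the quadrature difference to sin(φ), so a two-argument arctangent recovers φ over the full range; which ratio feeds the arctangent simply fixes the phase origin.

```python
import numpy as np

def phase_from_four_samples(s0, s1, s2, s3):
    """Per-pixel spatial phase from frames sampled at wt = 0, pi/2, pi, 3*pi/2."""
    in_phase = s0 - s2        # sample(0)    - sample(pi)     ~ cos(phi)
    quadrature = s1 - s3      # sample(pi/2) - sample(3*pi/2) ~ sin(phi)
    return np.arctan2(quadrature, in_phase)   # full -pi..pi range per pixel

# Synthetic check: a known phase of 0.5 rad is recovered at every pixel.
wt = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
frames = [0.5 + 0.5 * np.cos(w - 0.5) * np.ones((4, 4)) for w in wt]
print(phase_from_four_samples(*frames))       # ~0.5 everywhere
```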
[63] Although four measurements are needed to obtain the phase, phase estimates may be updated whenever a new measurement is available as shown in Fig. 14 which illustrates depth data update cycles. In Fig. 14, "1", "2", "3", etc. in squares represent intensity measurements made every time cot advances by π/2 as shown in Fig. 13. A phase estimate 0nbased on in- phase (li) and quadrature ((¾) values is available after four measurements (0 - 3). When the fifth ("4") measurement becomes available a new in-phase (l2) value is computed. Phase 0n+1may then be estimated as the arctangent of l2/ Qi. Each new measurement leads to a new in-phase or quadrature value that replaces the previous one.
[64] Phase information may also be estimated from more than one cycle of data. The most recent 8 or 12 data points may be used for example. A finite impulse response filter may be applied to weight more recent measurements more heavily than older ones.
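A sketch of the update cycle of Fig. 14 (illustrative; the class and method names are assumptions). Each incoming frame overwrites the stored frame for its sample position, so a refreshed in-phase or quadrature value, and hence a refreshed phase, is available every frame. A finite impulse response variant, as noted above, would instead keep the last 8 or 12 frames and weight recent ones more heavily before forming the differences.

```python
import numpy as np

class RollingPhaseEstimator:
    """Per-frame phase updates in the style of Fig. 14 (sketch only)."""

    def __init__(self):
        # Most recent frame at each sample position wt = 0, pi/2, pi, 3*pi/2.
        self.latest = [None, None, None, None]

    def add_frame(self, frame, frame_index):
        """Store the new frame; return an updated phase map, or None at start-up."""
        self.latest[frame_index % 4] = frame
        if any(f is None for f in self.latest):
            return None                      # fewer than four samples collected so far
        s0, s1, s2, s3 = self.latest
        return np.arctan2(s1 - s3, s0 - s2)  # same I/Q arithmetic as above
```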
[65] Fig. 15 shows an example of compact 3D depth capture system output data 1505. The actual data are 3D points (x, y, z). A 3D impression or rendering is created in the plane image of Fig. 15 via simulated lighting and shading effects. To recap, a set of data points such as the face shown in Fig. 15 is obtained as follows: A 3D system projector projects a pattern of infrared
light onto the object. The pattern has sinusoidal variation in time and in one spatial dimension. A camera samples the pattern to obtain its phase at each pixel across an image. Depth information is computed from the phase according to geometric relationships that depend, in part, on the separation of the camera from the projector.
[66] The projector is contained in a very small volume, compatible with integration into mobile devices such as smart phones and tablets. In an embodiment, a projector fits inside a 3 mm x 6 mm x 10 mm volume. The camera may be similar to cameras already integrated in mobile devices. In an embodiment, the camera is a rolling shutter CMOS camera sensitive in the infrared.
[67] Several variations of the compact 3D depth capture system described so far are possible. 3D system pattern projectors or cameras, or both, may be changed in some embodiments.
[68] Different kinds of linear-array MEMS-ribbon light modulators may be used in the pattern projector. Light modulators described in the '448 patent or in US patent 7,286,277 issued to Bloom on 10/23/2007 (the disclosure of which is incorporated by reference) are examples of such alternatives. In addition, MEMS 2D array modulators, such as the Texas Instruments Digital Mirror Device, may be operated to produce patterns having one dimensional spatial variation as described above.
[69] As an alternative to MEMS light modulators, a linear array of light emitters, such as vertical-cavity surface-emitting lasers (VCSELs) may be used to produce patterns with one dimensional spatial variation. A ferroelectric liquid crystal array modulating a continuous light source is another possibility. Further, the light source may be a light emitting diode rather than a laser.
[70] A global shutter camera may be used instead of a rolling shutter camera, recognizing that some potential data acquisition time is wasted during camera reset. The projector light source may be turned off during global shutter reset times.
[71] With either a rolling shutter or global shutter camera, phase estimates may be obtained using three, rather than four measurements per pattern temporal period. When three pattern
intensity measurements I₁, I₂, I₃ are made, spaced apart by ωt = 2π/3, phase is estimated according to:

φ = tan⁻¹[√3·(I₁ − I₃) / (2·I₂ − I₁ − I₃)]
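An illustrative one-line implementation of this three-sample estimator (the function name and use of NumPy are assumptions, not part of the disclosure):

```python
import numpy as np

def phase_from_three_samples(i1, i2, i3):
    """Phase from three intensity samples spaced wt = 2*pi/3 apart."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```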
[72] Projector and camera cycles may be triggered from a common timing signal or a signal derived from either the projector or camera clock frequency.
[73] The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other embodiments without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
[74] All elements, parts and steps described herein are preferably included. It is to be understood that any of these elements, parts and steps may be replaced by other elements, parts and steps or deleted altogether as will be obvious to those skilled in the art.
[75] Broadly, this writing discloses at least the following:
Compact 3D depth capture systems are based on a 3D system driver/interface, a 3D system camera, and a 3D system projector. The systems are compatible with integration into mobile electronic devices such as smart phones and tablet computers.
Concepts
This version has forward references; e.g. Concept X: The system of Concept X-1 or Concept X+1 ... The forward references are intended to aid understanding of this disclosure of technology.
Concepts disclosed in numbered format:
1. A three-dimensional depth capture system comprising: a pattern projector; a camera; and, a mobile device driver/interface; the projector, camera and driver/interface integrated into a mobile electronic device, the projector comprising a digital, linear-array MEMS ribbon light modulator and a lens system that projects two-dimensional images having spatial variation in only one dimension, and, the MEMS ribbon light modulator being driven by digital electronic signals from the driver/interface, the signals expressing a pulse density modulation representation of a sine wave, the sine wave characterized by a temporal period.
2. The system of Concept 1, the camera sampling the images three times per period.
3. The system of Concept 2, the mobile device driver/interface providing depth data based on the three most recent camera samples to an application processor in the mobile electronic device.
4. The system of Concept 1, the camera sampling the images four times per period.
5. The system of Concept 4, the mobile device driver/interface providing depth data based on the four most recent camera samples to an application processor in the mobile electronic device.
6. The system of any of Concepts 1 - 5 or Concepts 7 - 26, the pulse density modulation representation oversampling the sine wave by at least 64 times.
7. The system of any of Concepts 1 - 6 or Concepts 10 - 26, the digital signals being stored in a circular memory buffer.
8. The system of Concept 7, the images having a spatial frequency that is selected by relative offsets into the memory buffer.
9. The system of Concept 7, signals for adjacent active ribbons in the array obtained from offset memory addresses in the memory buffer.
10. The system of any of Concepts 1 - 9 or Concepts 11 - 26, the camera having an integration time such that the images appear to have sinusoidal temporal intensity variation when viewed by the camera.
11. The system of any of Concepts 1 - 10 or Concepts 12 - 26, the digital signals being compatible with CMOS logic levels.
12. The system of any of Concepts 1 - 11 or Concepts 13 - 26, the digital electronic signals obeying a pseudo bipolar MEMS drive scheme.
13. The system of any of Concepts 1 - 12 or Concepts 14 - 26, the digital electronic signals comprising a DC bias voltage and a low-voltage ribbon control signal added in series to take advantage of ribbon nonlinear displacement characteristics.
14. The system of any of Concepts 1 - 13 or Concepts 16 - 26, the linear-array MEMS ribbon light modulator having N active ribbons and K addressing lines, N being an integer multiple of K.
15. The system of Concept 14, the images characterized by N/K cycles of a spatial period.
16. The system of any of Concepts 1 - 15 or Concepts 18 - 26, the camera being a rolling shutter CMOS camera.
17. The system of any of Concepts 1 - 15 or Concepts 18 - 26, the camera being a global shutter camera.
18. The system of any of Concepts 1 - 17 or Concepts 20 - 26, the projector comprising a diode-laser infrared light source.
19. The system of any of Concepts 1 - 17 or Concepts 20 - 26, the projector comprising an infrared light emitting diode.
20. The system of any of Concepts 1 - 19 or Concepts 21 - 26, the projector and camera sharing a common timing signal.
21. The system of any of Concepts 1 - 20 or Concepts 22 - 26, the projector and camera having an operating duty cycle less than 100%.
22. The system of any of Concepts 1 - 21 or Concepts 23 - 26, the projector fitting into a volume 3 mm by 6 mm by 10 mm or smaller.
23. The system of any of Concepts 1 - 22 or Concepts 24 - 26, the mobile device driver/interface providing depth data to an application processor in the mobile electronic device.
24. The system of any of Concepts 1 - 23 or Concepts 25 - 26, the mobile device driver/interface providing depth data, based on a finite impulse response filter function of recent camera samples, to an application processor in the mobile electronic device.
25. The system of any of Concepts 1 - 24, the mobile electronic device being a cell phone.
26. The system of any of Concepts 1 - 24, the mobile electronic device being a tablet computer.
27. A three-dimensional depth capture system comprising: a pattern projector; a camera; and, a mobile device driver/interface; the projector, camera and driver/interface integrated into a mobile electronic device, the projector comprising a spatial light modulator and a lens system that projects two-dimensional images having spatial variation in only one dimension, and, the light modulator being driven by electronic signals from the driver/interface, the signals expressing a sine wave, the sine wave characterized by a temporal period.
28. The system of Concept 27, the light modulator comprising an array of vertical-cavity surface-emitting lasers.
29. The system of Concept 27, the light modulator comprising ferroelectric liquid crystals.
30. The system of Concept 27, the light modulator comprising a light emitting diode.
31. The system of any of Concepts 27 - 30 or Concepts 33 - 38, the projector driven by digital data signals.
32. The system of any of Concepts 27 - 30 or Concepts 33 - 38, the projector driven by analog data signals.
33. The system of any of Concepts 27 - 32 or Concepts 37 - 38, the camera sampling the images three times per period.
34. The system of Concept 33, the mobile device driver/interface providing depth data based on the three most recent camera samples to an application processor in the mobile electronic device.
35. The system of any of Concepts 27 - 32 or Concepts 37 - 38, the camera sampling the images four times per period.
36. The system of Concept 35, the mobile device driver/interface providing depth data based on the four most recent camera samples to an application processor in the mobile electronic device.
37. The system of any of Concepts 27 - 36, the camera being a rolling shutter CMOS camera.
38. The system of any of Concepts 27 - 36, the camera being a global shutter camera.
Appendix A (US 7,940,448)
Technical Field
[01] The disclosure is generally related to the fields of optical display systems and optical micro-electromechanical systems (MEMS) devices.
Background
[02] Projection high-definition television (HDTV), high-resolution printing, and maskless semiconductor lithography are a few examples of applications of high-resolution optical display technology. In each case a one- or two-dimensional array of optical modulators and a companion optical system distribute light into millions of pixels that form an image. Some common types of optical modulators are digital mirror devices, grating light modulators, polarization light modulators, liquid crystals and liquid crystal on silicon panels. Depending on their design, these optical modulators may operate in reflective or transmissive modes.
[03] MEMS ribbon structures are used in several types of optical modulators and, despite their simplicity, have spawned dozens of new designs for optical image forming systems. The evolution of optical ideas has led to systems that depend on fewer and fewer ribbons to create each pixel in the final image. Early
grating light modulators used as many as six ribbons per pixel, for example, while polarization light modulators have been demonstrated with two ribbons per pixel.
[04] MEMS ribbon structures most often appear in linear-array light modulators. Linear arrays "paint" two-dimensional images when their line-image output is swept back and forth by a scanner. Linear arrays take up far less chip real estate than two-dimensional arrays and are more closely matched to the etendue of laser light sources. If linear arrays could be made shorter by reducing the number of ribbons required to form a pixel, even more compact MEMS light modulator chips could be made.
Brief Description of the Drawings
[05] Fig. 1 shows a block diagram of a display system.
[06] Fig. 2 shows examples of reflective and transmissive linear array phase modulators.
[07] Figs. 3A and 3B show a micromechanical ribbon.
[08] Figs. 4A, 4B and 4C show optical arrangements with reflective and transmissive modulators.
[09] Fig. 5 shows a display system based on a linear array phase modulator.
[010] Fig. 6 shows an example of a coding scheme for a digital phase modulator.
[011] Fig. 7 shows a second example of a coding scheme for a digital phase modulator.
[012] Fig. 8 shows an example of coding for an analog phase modulator.
[013] Figs. 9A and 9B are graphs that aid understanding of digital and analog phase modulation schemes.
[014] Figs. 10A and 10B illustrate relationships between exemplary object-plane and Fourier-plane discriminators for polarized and unpolarized light.
[015] Figs. 11A and 11B show optical systems and Fourier-plane filter response functions.
[016] Figs. 12A and 12B show exemplary optical systems for polarized light.
[017] Fig. 13 shows a Savart plate.
[018] Fig. 14 illustrates polarization relationships in a Savart plate phase discriminator.
[019] Figs. 15A and 15B show an exemplary modulator - discriminator arrangement.
[020] Figs. 16A and 16B show exemplary optical systems for unpolarized light.
[021] Fig. 17 illustrates various functional relationships to aid understanding of Fourier-plane discriminators.
[022] Fig. 18 illustrates various functional relationships to aid understanding of Fourier-plane discriminators.
[023] Fig. 19 shows the effect of shifting the output of a linear array on alternate scans.
[024] Fig. 20 shows one possible way to interleave successive scans of a linear array display system.
[025] Fig. 21 illustrates how finite differencing may be applied to two-dimensional phase modulator arrays.
[026] Figs. 22A and 22B show experimentally obtained images of light output from linear array phase modulators.
Detailed Description
[027] Fig. 1 shows a block diagram of a display system. The system includes light source 105, phase modulator 110, optical system 115, and line scanner 120. Light source 105 is a laser, light emitting diode, arc lamp or other bright light source. Phase modulator 110 includes a linear array of elements that change the phase of light; the elements may operate in transmission or reflection.
Optical system 115 creates pixels corresponding to phase steps created by the phase modulator. The optical system can be configured such that these phase steps correspond to either bright or dark pixels. By definition, phase repeats every 2π; therefore the maximum phase step is π. Further, digital (one bright level and one dark level) or analog (many gray levels) operation is possible. Line scanner 120 "paints" two-dimensional images by scanning a line image back and forth.
[028] The optical system acts as a phase edge discriminator. If the linear array of phase elements is set such that it presents a single phase edge - either a step increase or step decrease in phase - then a single-pixel line image is created. A linear array having n elements can be programmed with n edges corresponding
to n pixels. (The number of elements could be n + 1 depending on how elements at the end of an array are counted, but we still refer to the number of elements and pixels as being "the same".) In the case of a ribbon modulator, only one ribbon per pixel is needed. Furthermore, it is shown below that advanced techniques make possible twice as many pixels as ribbons; i.e. "half a ribbon" per pixel.
[029] The display system is now described in detail. First modulator and display basics are presented, followed by details of coding schemes to translate image data into phase settings for modulator elements. Then a novel optical discriminator system is described. Finally, advanced techniques, extensions to two-dimensional modulators, and experimental results are discussed.
[030] Modulator and display basics
[031] Fig. 2 shows examples of reflective and transmissive linear array phase modulators. Phase modulator 205 contains a large number of individual reflective elements such as elements 206, 207 and 208. Although only a few dozen elements are shown in the figure, an actual modulator may contain hundreds or thousands of elements. In HDTV applications, for example, the number of elements is often between 1,000 and 5,000. Each element in modulator 205 reflects light. A difference in the phase of light reflected by adjacent modulator elements may be achieved by setting the elements to different heights; i.e. moving them perpendicular to the plane of the figure.
Reflective elements do not have to be moveable, however; they can be realized as liquid-crystal-on-silicon elements, for example. The elements in modulator
205 are drawn as rectangles; however, they may be square or have other shapes.
[032] Phase modulator 210 is very similar to phase modulator 205 except that the individual elements of the modulator, such as elements 211, 212 and 213, are transmissive. These transmissive elements impart a variable phase to light passing through them. The transmissive elements are not necessarily movable; they may be realized as liquid crystals, for example. Other aspects of phase modulator 210, such as the number and shape of the elements are the same as in the case of phase modulator 205.
[033] Figs. 3A and 3B show a micromechanical ribbon which is an example of a MEMS structure that may be used to make elements in a reflective phase modulator such as modulator 205. Fig. 3A shows a ribbon in an undeflected state while Fig. 3B shows a ribbon deflected by a distance Δz.
[034] In Figs. 3A and 3B MEMS ribbon 305 is supported by supports 310 near a substrate 320. In typical applications the dimensions of the ribbon are approximately 100 μm long (i.e. in the y direction), 10 μm wide (i.e. in the x direction) and 0.1 μm thick (i.e. in the z direction). These dimensions may vary greatly in different designs however. It would not be unusual for any of the measurements just mentioned to be five times greater or smaller, for example.
[035] Fig. 3A shows light beam 350 reflecting off ribbon 305 and returning as light beam 355. If light beam 350 reflects off a ribbon deflected by a distance Δz (as shown in Fig. 3B), then the phase of reflected beam 355 is changed by 2(2π/λ)Δz, where λ is the wavelength of the light. If adjacent ribbons in an array are deflected by distances Δz₁ and Δz₂, then the phase difference between light beams reflected from them is 2(2π/λ)(Δz₁ - Δz₂).
[036] Deflections of ribbon 305 in the z direction may be achieved by applying a voltage between the ribbon and substrate 320. Depending on the size of the ribbon, its height may be adjusted in as little as a few nanoseconds.
[037] Dotted arrow 360 indicates the path that light beam 350 would follow if ribbon 305 were transmissive rather than reflective. It is often useful to draw optical systems unfolded at reflective surfaces; i.e. it can be useful to draw light beams as if they were transmitted by reflective surfaces.
[038] Figs. 4A, 4B and 4C show optical arrangements with reflective and transmissive modulators. In Fig. 4A input light beam 405 is modulated by reflective phase modulator 410 to form output light beam 415. Steering mirrors 420 and 422 direct light beams to and from the modulator at nearly, but not exactly, normal incidence to the modulator. In Fig. 4B input light beam 425 is modulated by reflective modulator 430 to form output light beam 435. Output beam 435 is separated from input beam 425 by beam splitter 440. The arrangement shown in Fig. 4B allows the modulator to be illuminated at normal incidence. In Fig. 4C input light beam 445 is modulated by transmission modulator 450 to form output beam 455.
[039] Fig. 5 shows a display system based on a linear array phase modulator. In Fig. 5, light source 505 emits light that passes through lens 510 before being reflected or transmitted by linear array phase modulator 515. Phase modulator 515 may be a reflective or transmissive modulator; in the figure it is drawn as if it
were transmissive and slight offsets in the linear array of elements shown schematically in inset 520 indicate phase differences imparted to the transmitted light. Optical system 525 converts the phase differences into a line image that is reflected by scanner 530. Finally scanner 530 sweeps line image 535 across a surface 540 for viewing or printing. As examples, in a projection display, surface 540 could be a viewing screen while in a lithography system surface 540 could be a wafer coated with photoresist. In lithography and other printing applications an alternative to the scanning mirror is to move the surface rather than scanning the line image. In some systems a rotating prism may replace the scanning mirror.
[040] Thus, a display system includes a light source, phase modulator, optical system and line scanner. The phase modulator contains a linear array of transmissive or reflective elements such as MEMS ribbons or liquid crystal modulators and the elements may be illuminated at normal incidence or off-axis. An optical system converts phase differences created by adjacent modulator elements into brightness variations among pixels in a line image. The line image may be scanned to form a two-dimensional image.
[041] Coding schemes
[042] Image data, meaning the brightness of individual pixels in an image, is sent to a display system for display. Coding schemes are used to translate the image data into phase settings for modulator elements in a linear array phase modulator. The coding schemes described here are analogous to non-return-to- zero, inverted (NRZI) coding used in digital signal transmission. NRZI is a
method of mapping a binary signal to a two-level physical signal. An NRZI signal has a transition at a clock boundary if the bit being transmitted is a logical one, and does not have a transition if the bit being transmitted is a logical zero. (NRZI can also take the opposite convention in which a logical zero is encoded by a transition and a logical one is encoded by a steady level.)
[043] Fig. 6 shows an example of a coding scheme for a digital phase modulator. A digital phase modulator creates dark or bright pixels from phase differences of 0 or π between adjacent modulator elements. Gray scale, if desired, may be achieved via time domain techniques such as pulse width modulation.
[044] In Fig. 6 data is shown as bits such as bits 605, 606 and 607 while the phase of modulator elements is represented by line segments such as 610, 611 and 612. The line segments appear in either of two positions representing a phase difference of π between them. Consider bit 615, a "1" indicating a bright pixel. This bit is coded as a π phase shift between modulator elements 610 and 611. It makes no difference which of elements 610 and 611 is "up" or "down"; only the difference in their phases matters. Notice for example that bit 607 is also a "1" yet it is represented by elements in the opposite configuration from 610 and 611. "0" bits are represented by adjacent elements in the same
configuration.
[045] Logic diagram 620 shows one way to formally specify the phase of the next element in a linear array phase modulator. Suppose for example that a string of dark bits ("0") has been encoded by a string of phase modulator
elements all having the same phase. How should a subsequent bright bit ("1") be represented? Diagram 620 shows that the phase "bₖ" of the next modulator element is determined from the value of the bit "aₖ" ("1" in this example) and the phase of the previous element ("bₖ₋₁", where phase is normalized such that π = 1). This is equivalent to saying "create a phase edge for a bright pixel, keep the phase constant for a dark pixel". That is why the line segments in the figure straddle the dotted line boundaries between bits.
[046] Fig. 7 shows a second example of a coding scheme for a digital phase modulator. This example is the same as that shown in Fig. 6 except that the coding rule has changed to "flip the phase for a dark pixel, keep it the same for a bright pixel". The choice of whether to use the coding scheme of Fig. 6 or Fig. 7 in a digital modulator depends on the optical discriminator system which translates phase differences into bright or dark pixels.
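By way of illustration only, the coding rules of Figs. 6 and 7 can be sketched as follows (not part of the original disclosure; the function name is an assumption, and phases are normalized so that π = 1 as in paragraph [045]):

```python
def encode_ribbon_phases(bits, bright_flips=True):
    """Map image bits to modulator element phases (normalized so pi = 1).

    bright_flips=True implements Fig. 6 (flip the phase for a "1"/bright
    pixel); bright_flips=False implements Fig. 7 (flip for a "0"/dark pixel).
    Each bit is encoded by the edge between consecutive elements, so n bits
    need n + 1 element phases.
    """
    phases = [0]                          # phase of the first element
    for a in bits:
        flip = a if bright_flips else (1 - a)
        phases.append(phases[-1] ^ flip)  # b_k = b_(k-1) XOR flip
    return phases

# Example: the bit sequence later shown in Fig. 9A.
print(encode_ribbon_phases([1, 1, 1, 0, 1, 0]))   # [0, 1, 0, 1, 1, 0, 0]
```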
[047] Fig. 8 shows an example of coding for an analog phase modulator. As in Figs. 6 and 7, positions of line segments represent the phase of modulator elements. However in analog coding the elements are adjustable through a range of 2π. In this example, greater phase differences correspond to greater pixel brightness. "0", meaning dark, pixels are represented by 0 phase differences between adjacent pixels. Brightness "a" is represented by a small phase difference while brightness "b" is represented by a greater phase difference.
[048] Figs. 9A and 9B are graphs that aid understanding of digital and analog phase modulation schemes. Fig. 9A pertains to digital modulation described in
connection with Figs. 6 and 7 while Fig. 9B pertains to analog modulation described in connection with Fig. 8. In Fig. 9A a sequence of digital bits ("1 1 1 0 1 0") is shown along the horizontal axis while the phase of modulator elements is shown along the vertical axis. Bits appear at boundaries between modulator elements and the case illustrated is the one in which "1" is represented by a phase difference while "0" is represented by no change in phase. Therefore vertical lines appear in the graph whenever a "1" is desired as the displayed output.
[049] At the first "1" bit (counting from the left) a π phase difference between the first and second modulator elements is shown by a vertical line. At the second "1" bit a dotted line and curved arrow indicate that the desired phase difference of π between the second and third modulator elements may be achieved either by creating a phase difference of 2π between the first and third modulator elements or by a zero phase difference between those elements. Since 0 and 2π phase shifts are the same, elements in a reflective digital phase modulator need not move more than λ/4 where λ is the wavelength of light.
[050] In Fig. 9B a sequence of analog pixel brightnesses ("a b c d e f") is shown along the horizontal axis while the phase of modulator elements is shown along the vertical axis. Pixels are created from phase differences at boundaries between modulator elements; greater phase differences correspond to greater pixel brightness in this example, but the opposite could be true depending on the optical discriminator system used. Vertical lines appear in the graph whenever a (non-dark) pixel is desired as the displayed output.
[051] Starting from the left, the phase of each element increases for pixels "a" and "b"; however, pixel "c" is created by the phase decrease between the third and fourth modulator elements. Pixel "c" would be equally bright if the phase difference between the third and fourth modulator elements had been obtained from an increase in phase. The brightness of pixel "c" depends on the magnitude of the phase difference, not its sign.
[052] Pixel "e" is created by a phase increase from the fifth to the sixth modulator elements. However phase repeats every 2π by definition. The dotted line and curved arrow indicate how the phase of the sixth modulator element can be set to represent the desired phase increase modulo 2π. An alternative would be to set the phase of the sixth element to represent a phase decrease from the fifth element. The pixel brightness is the same even though the absolute phase of the sixth element is different in the two cases.
[053] Fig. 9B suggests that several coding strategies are possible for an analog phase modulator. One strategy is to always alternate the sign of the phase difference between modulator elements; i.e. if the transition between the previous two elements was a phase increase, the transition between the next two is a phase decrease. This strategy tends to direct light to high angles in an optical system. Another strategy is to always keep the sign of the phase difference between modulator elements the same; i.e. always increasing or always decreasing. Of course, phase "wraps around" at 2π. This strategy tends to direct light near the axis of an optical system. A third strategy is to randomly choose the sign of the phase difference between modulator elements.
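The three sign-selection strategies can also be expressed compactly in code. The Python sketch below is illustrative only (the function name and test pattern are assumptions, not from the disclosure); each strategy produces different absolute element phases but, modulo 2π, the same magnitudes of phase difference and therefore the same line image.

```python
import numpy as np

def encode_analog(deltas, strategy="alternate", seed=0):
    """deltas: desired phase-difference magnitudes at element boundaries (radians)."""
    rng = np.random.default_rng(seed)
    phases, sign = [0.0], 1.0
    for d in deltas:
        if strategy == "alternate":           # flip the sign at every boundary
            sign = -sign
        elif strategy == "monotone":          # always increase; phase wraps at 2*pi
            sign = 1.0
        else:                                 # "random": pick the sign at random
            sign = rng.choice([-1.0, 1.0])
        phases.append((phases[-1] + sign * d) % (2 * np.pi))
    return np.array(phases)

deltas = np.array([0.2, 0.5, 0.0, 0.9, 1.0]) * np.pi
for s in ("alternate", "monotone", "random"):
    print(s, np.round(encode_analog(deltas, s), 2))
```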
[054] Four broad schemes for translating image data into phase settings for modulator elements have been described: digital and analog, and bright pixels corresponding to either greater or smaller phase edges in each case.
[055] Optical discriminator
[056] An optical system converts phase edges created by the modulator into a line image that may be scanned to form a two-dimensional image for viewing. This system functions as an optical phase discriminator that may take many forms because of duality between operations performed in the object plane and the Fourier plane of an optical system.
[057] Figs. 10A and 10B illustrate relationships between exemplary object-plane and Fourier-plane discriminators for polarized and unpolarized light. Fig. 10A shows that convolution in the object plane of an optical system is equivalent to multiplication in the Fourier plane of the system. This relationship can be used as a guide to create different phase discriminator systems. Fig. 10B shows that at least four different optical systems can be constructed using the relationship of Fig. 10A.
[058] For a system using polarized light, a discriminator may be based on a Savart plate and polarizers in the object plane or on a Wollaston prism and polarizers in the Fourier plane. For a system using unpolarized light, a discriminator may be based on a thick hologram in the object plane or on an apodizing filter in the Fourier plane. Details of these optical arrangements are described below; however, there are no doubt other optical schemes that follow from the relationship illustrated in Fig. 10A.
[059] Figs. 11A and 11B show optical systems and Fourier-plane filter response functions. In Fig. 11A line 1105 represents the object plane of the system while line 1115 represents the Fourier plane. The object and Fourier planes exist on opposite sides of, and one focal length away from, lens 1110. Arrows 1120 and 1125 represent positive and negative delta functions respectively that are spaced apart by a small distance.
[060] Convolution of close-spaced negative and positive delta functions with a linear array of light modulator elements is equivalent to sampling the differences between phases of light coming from adjacent elements when the spacing between the delta functions is less than or equal to the spacing between element centers. Therefore Fig. 11A shows a system in which greater phase differences between adjacent elements in a phase modulator correspond to greater pixel brightness.
[061] In Fig. 11B line 1155 represents the object plane of the system while line 1165 represents the Fourier plane. The object and Fourier planes exist on opposite sides of, and one focal length away from, lens 1160. Arrows 1170 and 1175 represent two positive delta functions that are spaced apart by a small distance.
[062] Convolution of close-spaced positive delta functions with a linear array of light modulator elements is equivalent to sampling the similarities between phases of light coming from adjacent elements when the spacing between the delta functions is less than or equal to the spacing between element centers.
Therefore Fig. 11B shows a system in which smaller phase differences between adjacent elements in a phase modulator correspond to greater pixel brightness.
[063] According to Fig. 10A convolution in the object plane corresponds to multiplication in the Fourier plane. Therefore convolution with close-spaced positive and negative delta functions in the object plane corresponds to multiplication by a sine function in the Fourier plane as shown in Fig. 11A.
Convolution with close-spaced positive delta functions in the object plane corresponds to multiplication by a cosine function in the Fourier plane as shown in Fig. 11B.
[064] Considering further the "sine" case shown in Fig. 11A, as delta functions 1120 and 1125 are brought closer together, sampling the differences between adjacent segments in the object plane looks more like differentiation and the sine function in the Fourier plane begins to approximate a line. Said another way, differentiation in the object plane is equivalent to multiplication by a linear slope in the Fourier plane.
[065] The sine response in the Fourier plane is truncated in an optical system of finite lateral extent as drawn in Fig. 11A. The corresponding effect in the object plane is that delta functions 1120 and 1125 are broadened into sinc functions. Finally, optical efficiency is greatest when the spacing between delta functions (or sinc functions) in the object plane matches the spacing between phase modulator elements. In the case of Fig. 11A pixel brightness is proportional to sin²(φ/2) where φ is the phase difference between adjacent phase modulator elements. In the case of Fig. 11B brightness is proportional to cos²(φ/2).
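As a numerical illustration of these two response functions (an editorial sketch, not part of the disclosure), the fragment below converts a set of element phases into line-image pixel intensities for the sine (Fig. 11A) and cosine (Fig. 11B) discriminators.

```python
import numpy as np

def line_image(phases, mode="sin"):
    """One pixel per boundary between adjacent modulator elements."""
    dphi = np.diff(phases)                    # phase difference at each boundary
    if mode == "sin":                         # Fig. 11A: phase edges -> bright pixels
        return np.sin(dphi / 2) ** 2
    return np.cos(dphi / 2) ** 2              # Fig. 11B: uniform phase -> bright pixels

phases = np.array([0.0, np.pi, np.pi, 0.0, np.pi / 2])
print(line_image(phases, "sin"))              # approximately [1, 0, 1, 0.5]
print(line_image(phases, "cos"))              # approximately [0, 1, 0, 0.5]
```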
[066] Figs. 12A and 12B show exemplary optical systems for polarized light. The system of Fig. 12A uses Fourier plane optical elements while the system of Fig. 12B uses object plane optical elements. Either system may be configured to exhibit the sine response of Fig. 11A or the cosine response of Fig. 11B.
[067] In Fig. 12A, line segments 1205 represent elements of a linear array optical phase modulator. Lens 1210 is placed one focal length away from, and between, the phase modulator elements and a Wollaston prism 1215. The Wollaston prism is sandwiched by polarizers 1230 and 1235. In Fig. 12A polarizers 1230 and 1235 are illustrated with crossed polarization axes. This configuration yields the sine response of Fig. 11A. If the polarizers were oriented with parallel polarization axes, the cosine response of Fig. 11B would be obtained.
[068] In Fig. 12B, line segments 1255 represent elements of a linear array optical phase modulator. Lens 1260 is placed one focal length away from, and between, the phase modulator elements and Fourier plane 1265. Savart plate 1270 is placed between lens 1260 and phase modulator elements 1255. The Savart plate is sandwiched by polarizers 1280 and 1285. In Fig. 12B polarizers 1280 and 1285 are illustrated with crossed polarization axes. This configuration yields the sine response of Fig. 11A. If the polarizers were oriented with parallel polarization axes, the cosine response of Fig. 11B would be obtained.
[069] The system of Fig. 12B will now be considered in more detail. Fig. 13 shows a Savart plate 1305. A Savart plate is constructed from two birefringent plates with optical axes oriented 45° to the surface normal and rotated 90° with respect to each other. An incident light beam propagating through the first plate is resolved into ordinary and extraordinary beams which are displaced from each other. Upon entering the second plate, the ordinary beam becomes an extraordinary beam, and vice-versa. Two beams emerge from the Savart plate displaced along a diagonal by a distance, "d". The optical path difference between the two beams is zero for normal incidence.
[070] Fig. 14 illustrates polarization relationships in a Savart plate phase discriminator. Two scenarios are shown: "0" and "1". In scenario "0" Savart plate 1405 has optical axis 1410. Axes 1430 and 1435 represent polarization axes of polarizers placed on either side of the Savart plate; i.e. one closer to the viewer than the plate, one farther away. Consider a light beam at the position of dot 1420 and polarized along axis 1430. This light beam is split by the Savart plate into two component beams at the positions of dots 1415 and 1425 having polarizations perpendicular and parallel to optical axis 1410 respectively. When these two beams are analyzed by a polarizer oriented along axis 1435, no light passes. Scenario "0" describes the situation when adjacent elements of a phase modulator are viewed through a Savart plate sandwiched by crossed polarizers and the elements emit light in phase.
[071] In scenario "1" the light at dot 1425 arrives at the Savart plate with a π phase shift compared to the light at dot 1415. Now when light with polarization parallel and perpendicular to the optical axis of the Savart plate is analyzed by a polarizer along axis 1435, the components add in phase and maximum light is transmitted. Scenario "1" describes the situation when adjacent elements of a phase modulator are viewed through a Savart plate sandwiched by crossed polarizers and the elements emit light out of phase.
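A minimal interference model captures both scenarios. In the sketch below (an assumption made for illustration, not a description of the actual optics), the Savart plate overlaps the fields from two adjacent elements and the output polarizer passes their difference when the polarizers are crossed, or their sum when they are parallel.

```python
import numpy as np

def savart_pixel(phi1, phi2, crossed=True):
    """Normalized pixel intensity from the fields of two adjacent elements."""
    e1, e2 = np.exp(1j * phi1), np.exp(1j * phi2)
    amp = (e1 - e2) / 2 if crossed else (e1 + e2) / 2
    return abs(amp) ** 2

print(savart_pixel(0, 0))                     # scenario "0": in phase, crossed -> 0 (dark)
print(savart_pixel(0, np.pi))                 # scenario "1": pi shift -> 1 (maximum)
print(savart_pixel(0, np.pi, crossed=False))  # parallel polarizers: cosine response -> 0
```

Evaluating |amp|² symbolically gives sin²(Δφ/2) for crossed polarizers and cos²(Δφ/2) for parallel ones, matching the responses of Figs. 11A and 11B.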
[072] Placing a Savart plate sandwiched by crossed polarizers at the object plane (or between the modulator and the lens) is one way to construct a phase discriminator having an impulse response h(x) = sinc((x - p/2)/x0) - sinc((x + p/2)/x0). Here, p is the distance between the positive and negative sinc functions, analogous to ideal delta functions 1120 and 1125 in Fig. 11A, and x0 determines the width (first zero crossing) of the sinc functions; delta functions are obtained when x0 goes to zero. (If the polarizers are parallel, rather than crossed, the impulse response becomes h(x) = sinc((x - p/2)/x0) + sinc((x + p/2)/x0).) Each of the four
discriminators summarized in Figs. 10B, 12 and 16 can produce either of these impulse response functions.
[073] The Fourier transform, H(k), of h(x) is a sine function that is cut off at +/- (λ/x0). When the sine function is cut off at +/- (λ/p), as plotted at plane 1115 in Fig. 11A, pixel intensities in a corresponding line image are proportional to sin²(x - a) where 'a' represents the position of a particular pixel along the length of a linear array. If the sine function were cut off farther away from the optical axis than +/- (λ/p), the pixels in a line image would become square rather than have a sin²(x - a) spatial intensity profile.
[074] Figs. 15A and 15B show an exemplary modulator - discriminator arrangement. The arrangement of Figs. 15 is the off-axis arrangement of Fig. 4A using the discriminator elements of Fig. 12B with crossed polarizers for sine detection. A Savart plate is placed between a phase modulator and a lens such as lens 1260 of Fig. 12B (not shown in Figs. 15).
[075] In Fig. 15A input light beam 1505 is modulated by reflective phase modulator 1510 to form output light beam 1515. Steering mirrors 1520 and 1522 direct the light beams to and from the modulator at near normal incidence to the modulator. Polarizers 1530 and 1535 are oriented with their polarization axes perpendicular to one another and at 45° to the optical axis of Savart plate 1540. Polarizer 1530 is not necessary if input light beam 1505 is already polarized as is the case with laser light sources, for example.
[076] Fig. 15B shows the same arrangement as Fig. 15A with the addition of compensating plate 1545. This plate, which may be another Savart plate, is not necessary to achieve any of the optical phase discrimination functions of the optical system described herein. However, the plate may be useful for reducing second order effects that can appear for off-axis light.
[077] Figs. 16A and 16B show exemplary optical systems for unpolarized light. These are the apodizing filter and thick hologram respectively mentioned in connection with Figs. 10. Fig. 10A shows that multiplication of Fourier transforms of functions in the Fourier plane is equivalent to convolution of those functions in the object plane. In the system of Fig. 16A the Fourier transform of the electric field profile of light coming from a linear array phase modulator is multiplied by a filter having a sinusoidally varying optical density. This is equivalent to the convolution in the object plane of close-spaced positive and negative delta (or sinc, for practical cases) functions with the phase differences presented by modulator elements.
[078] In Fig. 16A line segments 1605 represent elements of a linear array optical phase modulator. Lens 1610 is placed one focal length away from, and between, the phase modulator elements and an apodizing filter 1615. The filter has a sinusoidally varying optical density and a phase shift across half of its extent as described below. In Fig. 16B line segments 1655 represent elements of a linear array optical phase modulator. Lens 1660 is placed one focal length away from, and between, the phase modulator elements and Fourier plane 1665. A thick hologram 1670 is placed between the object plane (where the elements of the phase modulator lie) and the lens. Steven K. Case has pointed out ("Fourier processing in the object plane", Optics Letters, 4, 286 - 288, 1979) that Fourier processing that is normally carried out by placing masks in the Fourier plane of an optical system may also be carried out by placing a thick hologram in the object plane. He shows that the hologram is a linear filter that operates on the spatial frequency spectrum of the object wave. A high pass spatial frequency filter in the object plane is equivalent to differentiation in the Fourier plane. The relationship of Fig. 10A may be used to design an object plane hologram to create a desired Fourier plane mask.
[079] We now consider how to construct an apodizing filter such as filter 1615. Fig. 17 illustrates various functional relationships to aid understanding of Fourier-plane discriminators. Fig. 17 shows four panels, a - d. Panel (a) shows the functional form of a sinusoidal electric field filter that, if placed in the Fourier plane of a system like that shown in Fig. 16A, would provide edge sampling behavior in the object plane of the system. Panel (b) shows the intensity profile corresponding to the field profile of panel (a); i.e. the function plotted in (b) is the square of that plotted in (a). Panel (c) compares the sinusoidal form of panel (a) (solid line) with the square root of the intensity profile of panel (b) (dotted line). In order to match the square root of intensity (c) to the sinusoidal form (a) an optical filter can be constructed from a plate having an optical density profile (b) combined with a phase shift as shown in (d). Profile (d) may be obtained by adding an extra thickness of glass to one half of a filter, for example. Fig. 17 shows functional relationships for constructing an apodizing filter for sine (phase difference) discrimination. This concept has been applied to astronomical observations by Oti, et al. ("The Optical Differentiation Coronagraph",
Astrophysical Journal, 630, 631 - 636, 2005).
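A short sketch makes the panel (a) - (d) construction explicit. The code below is an editorial illustration under assumed profiles (not the filter actually specified in the disclosure): the desired sinusoidal field transmission is realized as a plate whose intensity transmission is the square of the desired field, plus a π phase step over the half of the filter where the field transmission is negative.

```python
import numpy as np

k = np.linspace(-1.0, 1.0, 9)                 # normalized Fourier-plane coordinate
field = np.sin(np.pi * k / 2)                 # panel (a): desired field transmission
intensity = field ** 2                        # panel (b): profile an absorptive plate realizes
phase = np.where(field < 0, np.pi, 0.0)       # panel (d): extra-glass phase step on one half
reconstructed = np.sqrt(intensity) * np.exp(1j * phase)
print(np.allclose(reconstructed.real, field)) # True: amplitude + phase step reproduce panel (a)
```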
[080] Similar concepts may be used to construct a filter for cosine (phase similarity) discrimination as shown in Fig. 18 which also illustrates various functional relationships to aid understanding of Fourier-plane discriminators. Fig. 18 shows four panels, a - d. Panel (a) shows the functional form of a sinusoidal electric field filter that, if placed in the Fourier plane of a system like that shown in Fig. 16A, would provide edge sampling behavior in the object plane of the system. Panel (b) shows the intensity profile corresponding to the field profile of panel (a); i.e. the function plotted in (b) is the square of that plotted in (a). Panel (c) compares the sinusoidal form of panel (a) (solid line) with the square root of the intensity profile of panel (b) (dotted line). In order to match the square root of intensity (c) to the sinusoidal form (a) an optical filter can be constructed from a plate having an optical density profile (b) combined with a phase shift as shown in (d). Profile (d) may be obtained by adding an extra thickness of glass to part of a filter, for example. Fig. 18 shows functional relationships for constructing an apodizing filter for cosine (phase similarity) discrimination.
[081] Optical discriminators for converting a phase profile presented by a linear array phase modulator into an intensity profile forming a line image have been described for both polarized and unpolarized light. Further, discriminators can be designed with optical components placed in the object plane or the Fourier plane of an optical system.
[082] Advanced techniques
[083] Interleaving line images, using two-dimensional arrays and using phase difference discriminators for other applications are examples of advanced techniques that are extensions of the principles described so far.
[084] The display systems described so far produce (scanned) line images having the same number of pixels as modulator elements, since the pixels are the result of phase differences created between adjacent elements. Interleaving techniques may be used to increase the number of pixels produced to twice the number of modulator elements.
[085] Fig. 19 shows the effect of shifting the output of a linear array on alternate scans. In Fig. 19, squares such as 1905, 1915, 1925, 1935 represent elements of a linear array phase modulator. Differences in the phase of light originating from these elements are converted to pixel intensity plotted schematically on graph 1950 as solid lines. Pixel intensity is proportional to sin²(Δφ/2) where Δφ is the phase difference between array elements. Pixel intensity versus position along a line image such as that represented by graph 1950 is proportional to sin²(x - a) where 'a' represents the position of a particular pixel along the length of a linear array. If the elements of a phase modulator array are shifted by one half the element period, as represented by dashed squares 1910, 1920, 1930, 1940, for example, the corresponding pixels are interleaved with pixels created by an unshifted array. Intensities for these pixels are plotted on graph 1950 as dashed lines; they are proportional to cos²(x - a). Linear array optical phase modulators coupled with discriminators described herein can therefore create line images which may be smoothly interleaved. These images are composed of twice as many pixels as modulator elements.
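The interleaving arithmetic can be stated in a few lines. The following sketch (an illustrative assumption, not from the disclosure) shows that shifting the array by half an element period on alternate scans places the cos²-profile pixels midway between the sin²-profile pixels, doubling the number of pixel sites along the line.

```python
import numpy as np

n, pitch = 8, 1.0                             # number of modulator elements and element period
scan1 = np.arange(n) * pitch                  # pixel centers, unshifted scan (sin^2 profile)
scan2 = scan1 + pitch / 2                     # pixel centers, half-period-shifted scan (cos^2)
interleaved = np.sort(np.concatenate([scan1, scan2]))
print(len(interleaved))                       # 16 pixel sites from 8 elements
print(np.unique(np.diff(interleaved)))        # [0.5] -- uniform half-period spacing
```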
[086] Fig. 20 shows one possible way to interleave successive scans of a linear array display system. In Fig. 20 scanner 2030 alternately sweeps line images 2035 and 2045 across surface 2040 for viewing or printing. Line images 2035 and 2045 may be interleaved by slightly tilting scanner 2030 parallel to the line images as indicated by arrow 2050. Other methods for interleaving include moving a linear phase modulator slightly between alternate positions for alternating line images. This may be accomplished, for example, by jogging a phase modulator with a piezo actuator.
[087] So far the systems and methods described have been predicated on linear array optical phase modulators. However these systems and methods may be extended to two-dimensional phase modulators as well. A two-dimensional modulator may be considered to be an array of linear arrays. Phase difference (or phase similarity) discrimination may be applied along one dimension of a two-dimensional array while the other dimension of the array eliminates the need to scan a line image.
[088] Fig. 21 illustrates how finite differencing may be applied to two-dimensional phase modulator arrays. In Fig. 21, squares 2105, 2115, 2125, 2135, etc., and 2106, 2107, 2108, etc., and 2116, 2117, 2118, etc., and 2126, 2127, 2128, etc. represent elements of a two-dimensional phase modulator array. An example of such an array is a liquid crystal phase modulator. When an optical phase difference discriminator is used to detect phase differences between elements along lines 2150, 2152, 2154, etc., a two-dimensional image is formed. Representative pixel intensities from such an image are plotted on axes 2151, 2153, 2155, etc. in the figure.
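For illustration (an assumption, not part of the disclosure), the fragment below applies sine-type finite differencing along one axis of a two-dimensional phase array; each row behaves like a linear array, and no scanning is needed because the other axis already spans the second image dimension.

```python
import numpy as np

phase = np.random.default_rng(1).uniform(0, 2 * np.pi, size=(4, 6))  # 2D element phases
image = np.sin(np.diff(phase, axis=1) / 2) ** 2   # phase-difference discrimination per row
print(image.shape)                                # (4, 5): a two-dimensional image, no scanning
```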
[089] Figs. 22A and 22B show experimentally obtained images of light output from linear array phase modulators. Experiments were performed with MEMS ribbon modulators having 3.7 μm pitch and enough travel (see, e.g. Δz in Fig. 3B) to create π phase shifts. Individual ribbon control allowed arbitrary patterns to be generated with a simple electronics system. In the figures, white bars are drawn superimposed on the measured light output to indicate the relative phase of light modulator elements corresponding to the pixels in the line images.
[090] The optical discriminators described herein, especially in Figs. 10, 11, 12, and 16, may be used for purposes other than display, printing and lithography. Such discriminators may be used for optical data storage readout, for example. Bits stored on optical discs (compact discs, video discs, etc.) are read out by optical systems that detect differences in light reflected from pits or lands on the disc. Phase difference discriminators described here may be used to read out several parallel channels of data from an optical disc, for example.
[091] Display systems based on detecting phase differences between adjacent elements of a phase modulator have been described. These systems create as many pixels in a displayed image as elements in the phase modulator. In some cases twice as many pixels are created using interleaving techniques.
[092] As one skilled in the art will readily appreciate from the disclosure of the embodiments herein, processes, machines, manufacture, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, means, methods, or steps.
[093] The above description of illustrated embodiments of the systems and methods is not intended to be exhaustive or to limit the systems and methods to the precise form disclosed. While specific embodiments of, and examples for, the systems and methods are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the systems and methods, as those skilled in the relevant art will recognize. The teachings of the systems and methods provided herein can be applied to other systems and methods, not only for the systems and methods described above.
[094] In general, in the following claims, the terms used should not be construed to limit the systems and methods to the specific embodiments disclosed in the specification and the claims, but should be construed to include all systems that operate under the claims. Accordingly, the systems and methods are not limited by the disclosure, but instead the scope of the systems and methods are to be determined entirely by the claims.
What is claimed is:
1. A display comprising:
a light source;
a phase modulator comprising a linear array of elements that modulate the phase of light generated by the light source;
an optical phase discriminator that converts phase differences between light from adjacent modulator elements into pixels of a line image; and, a scanner that scans the line image across a screen to create a two-dimensional image.
2. The display of Claim 1 wherein the light source is a laser.
3. The display of Claim 1 wherein the light source is a light emitting diode.
4. The display of Claim 1 wherein the light source is an arc lamp.
5. The display of Claim 1 wherein the elements of the phase modulator are
reflective.
6. The display of Claim 5 wherein the elements of the phase modulator are
micro-electromechanical ribbons.
7. The display of Claim 1 wherein the elements of the phase modulator are
transmissive.
8. The display of Claim 7 wherein the elements of the phase modulator are
liquid crystal modulators.
9. The display of Claim 1 wherein the phase modulator lies in an object plane of the phase discriminator and the phase discriminator comprises a Savart plate.
10. The display of Claim 1 wherein the phase modulator lies in an object plane of the phase discriminator and the phase discriminator comprises a Wollaston prism, the Wollaston prism operating in a Fourier plane of the phase discriminator.
11.The display of Claim 1 wherein the phase modulator lies in an object plane of the phase discriminator and the phase discriminator comprises a thick hologram.
12. The display of Claim 1 wherein the phase modulator lies in an object plane of the phase discriminator and the phase discriminator comprises an apodizing filter, the apodizing filter operating in a Fourier plane of the phase
discriminator.
13. The display of Claim 1 wherein the phase discriminator has an object plane impulse response h(x) = sinc((x - p/2)/x0) - sinc((x + p/2)/x0) where p is a spatial period of elements in the phase modulator and x0 is a constant.
14. The display of Claim 1 wherein the phase discriminator has an object plane impulse response h(x) = sinc((x - p/2)/x0) + sinc((x + p/2)/x0) where p is a spatial period of elements in the phase modulator and x0 is a constant.
15. The display of Claim 1 wherein the scanner is an oscillating mirror.
16. The display of Claim 1 wherein the scanner is a rotating prism.
17. The display of Claim 1 wherein the number of pixels in the line image is the same as the number of modulator elements.
18. The display of Claim 1 wherein bright pixels in the line image correspond to non-zero phase differences between adjacent modulator elements.
19. The display of Claim 1 wherein dark pixels in the line image correspond to non-zero phase differences between adjacent modulator elements.
20. The display of Claim 1 wherein line images produced by successive scans of the scanner are interleaved by offsetting them parallel to the line images.
21. An optical system comprising:
a lens having an object plane and a Fourier plane;
a phase modulator located in the object plane, the modulator comprising a linear array of elements having spatial period, p, where each element is independently capable of changing the phase of light incident upon it;
optical components that sample the phase of light coming from the elements of the phase modulator, the components having an impulse response h(x) = sinc((x - p/2)/x0) - sinc((x + p/2)/x0), where p is the spatial period of elements in the array and x0 is a constant.
22. The system of Claim 21 wherein the optical components comprise a Savart plate located between the phase modulator and the lens.
23. The system of Claim 21 wherein the optical components comprise a
Wollaston prism located in the Fourier plane.
24. The system of Claim 21 wherein the optical components comprise a thick hologram located between the phase modulator and the lens.
25. The system of Claim 21 wherein the optical components comprise an
apodizing filter located in the Fourier plane.
26. The system of Claim 21 wherein the optical components have an impulse response in the Fourier plane that is a single-period sine function cut off at +/- (λ/p) where λ is the wavelength of light.
27. An optical system comprising:
a lens having an object plane and a Fourier plane;
a phase modulator located in the object plane, the modulator comprising a linear array of elements having spatial period, p, where each element is independently capable of changing the phase of light incident upon it;
optical components that sample the phase of light coming from the elements of the phase modulator, the components having an impulse response h(x) = sinc((x - p/2)/x0) + sinc((x + p/2)/x0), where p is the spatial period of elements in the array and x0 is a constant.
28. The system of Claim 27 wherein the optical components comprise a Savart plate located between the phase modulator and the lens.
29. The system of Claim 27 wherein the optical components comprise a
Wollaston prism located in the Fourier plane.
30. The system of Claim 27 wherein the optical components comprise a thick hologram located between the phase modulator and the lens.
31. The system of Claim 27 wherein the optical components comprise an
apodizing filter located in the Fourier plane.
32. The system of Claim 27 wherein the optical components have an impulse response in the Fourier plane that is a single-period cosine function cut off at +/- (λ/p) where λ is the wavelength of light.
33. An optical system comprising:
a linear array phase modulator capable of creating a phase step profile along its length; and,
an optical phase discriminator that converts the phase step profile into a line image comprising bright and dark pixels; wherein,
pixels in the line image are mapped in the phase profile with non-return-to-zero, inverted coding.
34. A display comprising:
a light source;
a phase modulator comprising a two-dimensional array of elements that modulate the phase of light generated by the light source; and, an optical phase discriminator that converts phase differences between light from modulator elements into pixels of a two-dimensional image.
35. The display of Claim 34 wherein the two-dimensional array comprises an array of linear arrays and the phase differences are converted into pixel intensities via phase difference discrimination.
36. The display of Claim 34 wherein the two-dimensional array comprises an array of linear arrays and the phase differences are converted into pixel intensities via phase similarity discrimination.
37. The display of Claim 34 wherein the phase modulator is a liquid crystal phase modulator.
Abstract of the Disclosure
A display system is based on a linear array phase modulator and a phase edge discriminator optical system.
[Appendix A drawing sheets: Fig. 1 (light source, phase modulator, optical system, line scanner); Figs. 2 - 3; Figs. 6 - 7 (modulo 2 add / XOR coding); Figs. 8 - 9; Fig. 10 (convolution in object plane, multiplication in Fourier plane); Figs. 11 - 13 and 15 (optical systems and discriminators); Fig. 16 (apodizing filter and thick hologram); Fig. 17; Fig. 19 (interleaved scans, graph 1950); Fig. 22.]
Appendix B (US SN 13/657,530)
Low-voltage drive for MEMS ribbon-array light modulators
Technical Field
[01] The disclosure is related to electronic drive systems for light modulators based on micro-electromechanical systems (MEMS) ribbon arrays.
Background
[02] MEMS ribbon arrays have proved useful in many different types of high speed light modulator. Some examples of different kinds of light modulators based on MEMS ribbon arrays are described in US patents 7,054,051, 7,277,216, 7,286,277, and 7,940,448.
[03] Fig. 1A is a conceptual drawing of a MEMS ribbon array 105. Ribbons, e.g. 110, 115, are mounted above a substrate 120. Application of a voltage between a ribbon and the substrate causes the ribbon to deflect toward the substrate; ribbon 110 is deflected, for example, while ribbon 115 is relaxed. Typical dimensions for a MEMS ribbon in an array are tens to hundreds of microns long, a few microns wide, and a fraction of a micron thick. Ribbons may be made from silicon nitride and coated with aluminum.
[04] MEMS ribbons can switch between relaxed and deflected states in as little as about ten nanoseconds. The corresponding high pixel switching speed means that a linear array of MEMS ribbons can do the job of a two-dimensional light modulator. A line image produced by a linear array modulator may be swept from side to side to paint a two dimensional scene.
[05] High-speed ribbons require high-speed electrical signals to drive them. Ribbons in a typical MEMS array need 10 to 15 volts potential difference from the substrate to deflect by a quarter optical wavelength. Switching 10 to 15 volts at hundreds of megahertz is a specialized task often requiring custom made electronic driver circuits. It would be more convenient if MEMS ribbon arrays could be driven by conventional high-speed digital electronic circuits. Tighter integration between MEMS and CMOS circuits, for example, could lead to a MEMS linear array being considered as an optical output stage for an integrated circuit.
Brief Description of the Drawings
[06] Fig. 1A is a conceptual drawing of a MEMS ribbon array.
[07] Fig. 1B is a simplified diagram showing a signal voltage supply and a bias voltage supply connected to a MEMS ribbon.
[08] Fig. 1C is a conceptual block diagram of a MEMS light modulator including a MEMS ribbon array and associated optics, a control signal voltage supply, and a bias voltage supply.
[09] Fig. 2 illustrates displacement versus voltage behavior for a MEMS ribbon such as that shown in Fig. 1B.
[10] Fig. 3 is a graph of pixel intensity versus voltage for a MEMS light modulator.
[11] Fig. 4A is a conceptual diagram of ribbon displacements in an unbiased MEMS ribbon array.
[12] Figs. 4B - 4D are graphs of displacement versus voltage, pixel intensity versus displacement, and pixel intensity versus voltage, respectively, for an unbiased MEMS-ribbon-array-based light modulator.
[13] Fig. 5A is a conceptual diagram of ribbon displacements in a biased MEMS ribbon array.
[14] Figs. 5B - 5C are graphs of displacement versus voltage and pixel intensity versus voltage, respectively, for a biased MEMS-ribbon-array-based light modulator.
[15] Fig. 6 is a graph of pixel intensity versus voltage for biased and unbiased MEMS-ribbon-array-based light modulators scaled such that the voltage required to obtain maximum pixel intensity is normalized to one in each case.
[16] Fig. 7A is a simplified schematic diagram for a ribbon drive scheme.
[17] Fig. 7B is a conceptual diagram of ribbon displacements in a MEMS ribbon array.
[18] Fig. 8 is a timing diagram for sequential frames in a conventional pseudobipolar ribbon drive scheme.
[19] Fig. 9 is a timing diagram for sequential frames in a pseudobipolar ribbon drive scheme using a bias voltage and a signal voltage.
[20] Fig. 10 is a graph of measured pixel intensity versus voltage for biased and unbiased MEMS-ribbon-array-based light modulators.
[21] Fig. 11 is a graph of measured pixel intensity versus voltage for biased and unbiased MEMS-ribbon-array-based light modulators scaled such that the voltage required to obtain maximum pixel intensity is normalized to one in each case.
Detailed Description
[22] Low-voltage drive systems for MEMS ribbon-array light modulators are based on a DC bias voltage and a low-voltage ribbon control signal added in series to take advantage of ribbon nonlinear displacement characteristics. Fig. 1B provides a simplified diagram of this arrangement. In Fig. 1B, MEMS ribbon 125 is suspended above substrate 130. Control signal voltage VSIG 135 and bias voltage VBIAS 140 are connected in series to apply a combined voltage between the ribbon and the substrate. A potential difference between ribbon and substrate causes the ribbon to deflect toward the substrate by an amount Δz as indicated by the dashed curve.
[23] Fig. 1B shows only one ribbon connected to the signal and bias voltage sources. In a typical MEMS ribbon array light modulator, all ribbons are connected to a bias voltage source. A subset of the ribbons, often every second ribbon in an array for example, is connected to a signal voltage source in series with the bias voltage source. Ribbons in this subset may be called "active" ribbons while those that are connected to the bias voltage, but not the signal voltage, may be called "static" ribbons. The active ribbons deflect independently of one another. This happens because the signal voltage applied to each active ribbon may be the same as or different from that applied to any other. In effect, each active ribbon has its own signal voltage source. In practice, one signal voltage source may drive multiple ribbons via multiplexing techniques. Hence the term "signal voltage source" means one that can supply different signal voltages to different ribbons.
[24] Fig. 1C is a conceptual block diagram of a MEMS light modulator including a MEMS ribbon array and associated optics, a control signal voltage supply, and a bias voltage supply. Optical details such as focusing lenses, phase discriminators, beam splitters, scan mirrors, and so on, are omitted in the figure for clarity. The modulator includes a MEMS ribbon array driven by a bias voltage and a control signal voltage in series. Modulators like that of Fig. 1C may be used in projection displays, microdisplays, printers and 3D depth-capture systems among other applications.
[25] A closer look at MEMS ribbon behavior helps explain how the series-connected bias and control signal voltage supplies of Figs. 1B and 1C lead to low-voltage operation of MEMS ribbon arrays. In particular, Fig. 2 illustrates displacement (i.e. Δz) versus voltage behavior for a MEMS ribbon such as that shown in Fig. 1B. In Fig. 2, displacement is normalized to the distance between ribbon and substrate in a relaxed state with no applied voltage. In other words, when the displacement is equal to one, the ribbon touches the substrate. The units of voltage are arbitrary.
[26] The dashed line plots ribbon displacement as calculated according to a model in which a ribbon is treated electrically as a capacitor and mechanically as a spring. When the ribbon displacement reaches 1/3 the system becomes unstable and the ribbon snaps down to the substrate. Said another way, the slope of the displacement versus voltage curve becomes infinite at that point. To avoid snap down, most MEMS ribbon array devices are operated at voltages less than the snap down voltage.
[27] The solid line plots an approximation to the actual ribbon displacement curve. In this approximation, displacement is proportional to voltage squared. It is apparent that the V² approximation underestimates the actual displacement; however, it is reasonably accurate away from the snap down voltage.
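A compact way to see the snap-down point is to balance a linear spring against the electrostatic attraction of a parallel-plate capacitor, which gives V² proportional to u(1 - u)² where u is displacement as a fraction of the gap. The sketch below is an illustrative assumption consistent with, but not copied from, the capacitor - spring model of Fig. 2.

```python
import numpy as np

u = np.linspace(0.0, 0.999, 2000)             # normalized ribbon displacement z / gap
v_squared = u * (1.0 - u) ** 2                # equilibrium condition: V^2 ~ u * (1 - u)^2
print(round(u[np.argmax(v_squared)], 3))      # 0.333 -- no stable equilibrium beyond 1/3 of the gap
```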
[28] Both the capacitor - spring model and the V² approximation to it exhibit nonlinear behavior: the additional voltage required to deflect a ribbon one unit decreases as voltage increases. Fig. 3 illustrates the implications of this behavior for an optical modulator based on a
MEMS ribbon array. Fig. 3 is a graph of pixel intensity versus voltage where pixel intensity is proportional to the sine squared of ribbon displacement, as is the case for many types of MEMS ribbon light modulators. Pixel intensity is normalized to its maximum value while the voltage scale is typical of a MEMS ribbon array device. In Fig. 3, the voltage ΔV1 required to deflect a ribbon a quarter optical wavelength (λ/4) is 10 V. This leads to pixel intensity I = sin²(π/2) = 1, the maximum. The additional voltage ΔV2 required to deflect the ribbon from λ/4 to λ/2 displacement, where I = 0, is only about 4 V.
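The "about 4 V" figure follows directly from the V² approximation; the arithmetic below is an illustrative check under that assumption.

```python
v_quarter = 10.0                              # volts for the first lambda/4 of deflection
v_half = v_quarter * 2 ** 0.5                 # displacement ~ V^2, so doubling it needs sqrt(2) x V
delta_v2 = v_half - v_quarter
print(round(delta_v2, 1))                     # 4.1 -- roughly the 4 V quoted above
```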
[29] The utility of the nonlinear dependence of pixel intensity on voltage is now described in further detail. To set a baseline for comparison, first consider ribbon behavior with zero bias voltage. Fig. 4A is a conceptual diagram of ribbon displacements in an unbiased MEMS ribbon array. In Fig. 4A, ribbons 405 and 410 (viewed in cross section) are stationary at a height Z0 above a substrate. Ribbons 415 and 420 (also viewed in cross section) deflect toward a substrate under the influence of an applied voltage.
[30] Figs. 4B - 4D are graphs of displacement versus voltage, pixel intensity versus displacement, and pixel intensity versus voltage, respectively, for an unbiased MEMS-ribbon- array-based light modulator. Fig. 4B is a graph of ribbon displacement versus voltage using the approximation that displacement is proportional to voltage squared. Fig. 4C is a graph of pixel intensity versus displacement where pixel intensity is proportional to the sine squared of the displacement. Finally, Fig. 4D is a graph of pixel intensity versus voltage where pixel intensity is proportional to the sine squared of the voltage squared.
[31] The graphs of Figs. 4B - 4D are normalized such that a unit voltage leads to a unit displacement, and a unit displacement leads to unit pixel intensity. The unit voltage, and the displacement which it produces, are labeled "VQW" since a (Z - Z0) = λ/4 ("quarter wave") ribbon displacement leads to maximum pixel intensity in a typical ribbon-based light modulator. Figs. 4A, 4B and 4D may now be compared with Figs. 5A, 5B and 5C which illustrate the effects of biasing.
[32] Fig. 5A is a conceptual diagram of ribbon displacements in a biased MEMS ribbon array. In Fig. 5A, ribbons 505 and 510 (viewed in cross section) are under the influence of a DC bias voltage that deflects them from height Z0 to height Z1 above a substrate. Ribbons 515 and 520 (also viewed in cross section) are biased by the same voltage and they deflect from height Z1 further toward a substrate under the influence of an additional signal voltage.
[33] Fig. 5B is a graph of displacement versus voltage for ribbons 515 and 520. In Fig. 5B the displacement scale represents additional displacement starting from biased height Z1 and the voltage scale represents signal voltage added to the DC bias voltage. The displacement scale is normalized such that unit displacement corresponds to a ribbon movement of λ/4. The voltage scale is normalized such that unit voltage in Fig. 5B has the same magnitude as unit voltage in Fig. 4B.
[34] The DC bias voltage that deflects ribbons to height Z1 in Fig. 5A and influences the results presented in Figs. 5B and 5C has unit magnitude on the scale of the graphs of Figs. 4B, 4D, 5B and 5C. In this example the bias voltage is VQW; i.e. (Z1 - Z0) = λ/4. Fig. 5B shows that the magnitude of the signal voltage (added in series to the bias voltage) required to obtain an additional λ/4 ribbon displacement is about 0.41. This is 59% less than the voltage required to obtain the first λ/4 ribbon displacement from Z0 to Z1.
[35] Fig. 5C is a graph of pixel intensity versus voltage where pixel intensity is proportional to the sine squared of the voltage squared. The pixel intensity scale is normalized such that unit pixel intensity is the maximum. The voltage scale represents signal voltage added to the DC bias voltage and is normalized in the same way as in Fig. 5B. Fig. 5C shows that the magnitude of the signal voltage (added in series to the bias voltage) required to obtain maximum pixel intensity is about 0.41. This is 59% less than the voltage required to obtain maximum pixel intensity with unbiased ribbons such as those of Fig. 4A.
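The 0.41 and 59% figures also follow from the V² approximation; the check below is an illustrative restatement of Figs. 5B and 5C in normalized units (an assumption, not a calculation taken from the disclosure).

```python
v_bias = 1.0                                  # quarter-wave bias voltage (normalized)
v_total = 2 ** 0.5                            # voltage for lambda/2 total deflection if z ~ V^2
v_signal = v_total - v_bias                   # extra signal voltage for the second lambda/4
print(round(v_signal, 2))                     # 0.41
print(round(1 - v_signal, 2))                 # 0.59 -- a 59% reduction versus the unbiased case
```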
[36] The bias voltage chosen for the example of Figs. 5A - 5C is just one possibility. Lower bias voltage leads to less dramatic reduction in additional signal voltage necessary to deflect a ribbon by a specified amount. Greater bias voltage leads to even greater ribbon deflection sensitivity to additional signal voltage. If the bias voltage is too great, however, snap down may occur.
[37] The nonlinear characteristics of ribbon displacement versus applied voltage allow a light modulator designer to pick a bias voltage such that a suitable range of control signal voltage leads to full light modulation from dark to bright. An intuitive, although not complete, explanation for the origin of these nonlinear characteristics may be obtained by noting that the force on a charged object, such as a MEMS ribbon, is given by F = qE where q is electric charge and E is electric field. The charge q stored on a ribbon is given by q = CV where C is the capacitance of the ribbon - substrate system and V is the voltage between them. The electric field E is given by V/d where V is the voltage between ribbon and substrate and d is the separation between them. Thus force F depends on V twice; i.e. it depends on V². Loosely speaking, the effect of a bias voltage may be thought of as storing charge on a ribbon and supplying one factor of V. A signal voltage then supplies the additional factor of V. This means that when a bias voltage is present, ribbon displacement responds to an added signal voltage approximately linearly.
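The linearization argument can be checked numerically. In the sketch below (an assumption for illustration), deflection is taken proportional to V² with V = V_bias + v_sig; the cross term 2·V_bias·v_sig dominates for small signals, so the response to v_sig is nearly linear.

```python
import numpy as np

v_bias = 1.0
v_sig = np.linspace(0.0, 0.4, 5)              # small signal voltages (normalized)
exact = (v_bias + v_sig) ** 2 - v_bias ** 2   # extra deflection from the V^2 dependence
linear = 2 * v_bias * v_sig                   # linear-in-signal approximation
print(np.round(exact, 2))                     # [0.   0.21 0.44 0.69 0.96]
print(np.round(linear, 2))                    # [0.   0.2  0.4  0.6  0.8 ]
```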
[38] If ribbon displacement were actually a linear function of V, then pixel intensity would be proportional to sin²(V) rather than sin²(V²). Ribbon behavior that approaches this extreme is illustrated in Fig. 6. Fig. 6 is a graph of pixel intensity versus voltage for biased and unbiased MEMS-ribbon-array-based light modulators scaled such that the voltage required to obtain maximum pixel intensity is normalized to one in each case. In other words, Fig. 6 shows the curves of Fig. 4D (dashed) and Fig. 5C (solid) on the same graph, but the voltages of Fig. 5C have been scaled such that the additional signal voltage needed to reach maximum pixel intensity is a unit voltage instead of roughly 0.41. The rescaling illustrates the functional form of the intensity versus voltage curve over the full range from dark to bright.
[39] The dotted curve in Fig. 6 is the pixel intensity function sin²(V) that would be obtained in an optical modulator if ribbon displacement were a linear function of V. It can be seen in Fig. 6 that when ribbons are biased, their response to control signal voltages more closely approximates this linear case. When a digital to analog converter (DAC) is used to generate ribbon control signals, a pixel intensity function like the dotted (sin²(V)) or solid (biased) curves in Fig. 6 is preferred over the dashed (no bias) curve because the full range of pixel intensities is distributed more evenly across the DAC's set of accessible discrete output voltages.
[40] It was pointed out in "Pseudo Bipolar MEMS Ribbon Drive" (US 12/910,072 by Yeh and Bloom, filed 10/22/2010) that potentially deleterious ribbon charging effects can be avoided, even using a unipolar power supply, by changing the direction of the electric field between ribbon and substrate periodically. Often the change is made during alternate frames of video information supplied to a MEMS light modulator. The methods disclosed in that application rely on the property that the force experienced by a ribbon under voltage V is the same as the force experienced under voltage -V. (Note that the term "bias ribbon" used in US 12/910,072 refers to a different concept than, and should not be confused with, bias voltages or biased ribbons discussed herein.)
[41] Fig. 7A is a simplified schematic diagram for a ribbon drive scheme. In Fig. 7A capacitor 705 represents the electrical properties of active ribbons in a MEMS ribbon array while capacitor 710 represents the electrical properties of static ribbons in the array. Active ribbons are those that change deflection to create different pixel intensities while static ribbons do not. Voltage sources 715 and 725 are connected to active and static ribbons, respectively. Voltage source 720 is connected to the ribbons' common substrate. Voltages produced by sources 715, 720 and 725 are VACTV (active ribbons), VSUB (substrate) and VSTAT (static ribbons), respectively. These voltage sources may include series-connected bias and signal voltages when voltage biasing is used as described below.
[42] In analogy to Figs. 4A and 5A, Fig. 7B is a conceptual diagram of ribbon displacements in a MEMS ribbon array. Static ribbons remain at displacement Z2 while active ribbons' displacement changes depending on voltages applied to them. Displacement Z2 is not necessarily a ribbon's relaxed state.
[43] Fig. 8 is a timing diagram for sequential frames in a pseudobipolar ribbon drive scheme that does not take advantage of ribbon biasing techniques to achieve low-voltage operation. (See Yeh and Bloom for more detailed discussion.) In the timing diagram, frames 1, 2 and 3 represent times during which a ribbon array is configured for a set of line images. During frames 1 and 3, static ribbons and the substrate remain at ground while active ribbons are controlled by voltages up to VMAX which is the voltage required for maximum pixel intensity. As mentioned above, this voltage may be as much as about 15 V for a typical MEMS ribbon array. During frame 2, active ribbons remain at ground while static ribbons and substrate vary between 0 and VMAX. The E field switches direction from frame to frame and prevents charge build up, but without biasing, control voltages are still in the 10 - 15 V range.
[44] Fig. 9 is a timing diagram for sequential frames in a pseudobipolar ribbon drive scheme using a bias voltage and a signal voltage. Here active ribbons vary between 0 and VSIG and static ribbons remain at VSIG or 0 during each frame. The substrate varies over a range from -(VMAX - VSIG) to VMAX from frame to frame. In practice VSIG may be about 5 V or less while VMAX may be about 10 to 15 V. Biasing the substrate reduces the magnitude of control voltages required to deflect active ribbons. Although control voltages change with each column of video data, bias voltages only change from frame to frame. Thus the bias voltage is considered "DC" compared to the signal voltages which may change thousands of times more rapidly.
[45] Experiments were performed with MEMS ribbon arrays to verify ribbon control with reduced signal voltages. The ribbons in the arrays were made of silicon nitride coated with aluminum and were approximately 250 μm long, 4 μm wide and 0.15 μm thick. They were separated from a conductive substrate by an approximately 0.45 μm air gap and an insulating silicon dioxide layer that was about 2.75 μm thick. (Each of these dimensions may vary by as much as plus or minus 50% without departing from the concept of a MEMS ribbon array.) Figs. 10 and 11 illustrate results of the experiments.
[46] Fig. 10 is a graph of measured pixel intensity versus voltage for biased and unbiased MEMS-ribbon-array-based light modulators. In Fig. 10 curve 1010 represents data for conventional ribbon drive with no bias voltage. Curve 1005 represents data for an array that is driven by a control signal that is in series with a 10 V bias voltage supply. It is apparent from the figure that the control signal voltage required to deflect a ribbon λ/4 beyond its biased position (about 4 V) is significantly less than the voltage required to deflect an unbiased ribbon by λ/4 (about 11 V).
[47] Fig. 11 is a graph of measured pixel intensity versus voltage for biased and unbiased MEMS-ribbon-array-based light modulators scaled such that the voltage required to obtain maximum pixel intensity is normalized to one in each case. (This figure is analogous to Fig. 6) In Fig. 11 curves 1105 and 1110 represent the same data as curves 1005 and 1010 in Fig. 10. In Fig. 11, however, the voltage scale for each curve has been normalized such that a unit voltage produces maximum pixel intensity. It is apparent from the figure that a biased ribbon provides a more linear pixel intensity response than an unbiased ribbon. This reduces the need for a high resolution DAC to access brightness levels evenly from bright to dark.
[48] As described above, ribbon nonlinear deflection characteristics may be used to reduce control signal voltages necessary to operate them. These techniques may be extended in several ways. For example, the best optical contrast is obtained from MEMS ribbon arrays that are operated in common mode; i.e. all ribbons are biased by the same bias voltage and alternate ("active") ribbons are controlled by signal voltages added in series with the bias. However, MEMS ribbon arrays do not have to be operated this way. For example, fixed ("static") ribbons may be fabricated such that their height above a substrate is different from the active ribbons' height even when all ribbons are in a relaxed, zero voltage state.
[49] Second, control signal voltages added to a bias voltage need not be positive or even unipolar. Active ribbons may be biased to an intermediate brightness state and then deflected through their entire bright to dark range by an AC control signal voltage.
[50] Finally, a transparent electrode may be placed above a ribbon and connected electrically to the substrate. A bias voltage applied between the ribbon and the electrode - substrate then has the effect of increasing the ribbon's sensitivity to control signal voltages without deflecting it at all.
[51] The common theme in all of these techniques is the use of a DC bias voltage to increase a MEMS ribbon's sensitivity to applied voltages and thus permit lower signal voltage operation.
[52] The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other embodiments without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
What is claimed is:
1. A MEMS light modulator comprising:
an array of MEMS ribbons suspended above a substrate;
a bias voltage source; and,
a signal voltage source;
the bias voltage source, signal voltage source, and ribbons in the array being connected in series to create an overall potential difference between the ribbons and the substrate such that ribbons' deflection sensitivity to signal voltages is increased when the bias voltage is greater than zero.
2. The modulator of Claim 1, the bias voltage source being connected to each ribbon in the array and the signal voltage source being connected in series with the bias voltage source to a subset of the ribbons in the array.
3. The modulator of Claim 2, the subset being every second ribbon in the array.
4. The modulator of Claim 1, the bias and signal voltage sources changing sign such that the direction of an electric field between ribbons and substrate alternates periodically.
5. The modulator of Claim 1, the length, width and thickness of each ribbon in the array being approximately 250 μm, 4 μm and 0.15 μm, respectively, and the ribbons being made from silicon nitride and a reflective metal coating.
6. A method for driving a MEMS ribbon-based light modulator comprising:
providing an array of MEMS ribbons suspended above a substrate, the array comprising active ribbons and static ribbons;
applying a bias voltage between each of the ribbons and the substrate to increase the ribbons' deflection sensitivity to signal voltage;
adding a signal voltage in series with the bias voltage for active ribbons in the array; and, controlling deflection of the active ribbons by varying the signal voltage applied to each one.
7. The method of Claim 6, every second ribbon in the array being an active ribbon.
8. The method of Claim 6 further comprising:
alternating the direction of an electric field between ribbons and substrate by periodically changing the sign of the bias and signal voltages.
Abstract of the Disclosure
A series bias voltage increases the sensitivity of a MEMS ribbon to control signal voltages. This effect is obtained because of the nonlinear dependence of ribbon deflection on applied voltage. The resulting low-voltage operation of MEMS ribbons makes them more compatible with high speed electronics.
[Appendix B drawing sheets: Fig. 1B; Fig. 1C (light in, modulated light out); Fig. 2 (displacement vs. voltage: capacitor - spring model and V² approximation); Fig. 3 (pixel intensity vs. voltage); Figs. 4A - 4D (no bias); Figs. 5A - 5C (quarter-wave bias); Fig. 6 (pixel intensity vs. scaled voltage); Figs. 7A - 7B (ribbon array electrical model); Fig. 8 (prior art timing); Figs. 10 - 11 (measured pixel intensity vs. voltage).]
Appendix C (US 8,368,984)
Pseudo Bipolar MEMS Ribbon Drive
Technical Field
[01] The disclosure is generally related to the field of electrical drive methods for microelectromechanical systems (MEMS) optical ribbon devices.
Background
[02] MEMS ribbon devices are used in several kinds of high speed light modulators including grating light valves, interferometric MEMS modulators, MEMS phased arrays, and MEMS optical phase modulators. Each of these light modulator technologies may be employed in personal display, projection display or printing applications, as examples.
[03] MEMS ribbons are made in a variety of shapes and sizes depending on the specific application for which they are designed; however, a typical ribbon may be roughly 50 - 350 microns long, 2 - 10 microns wide, and 0.1 - 0.3 microns thick. Ribbons are suspended roughly 0.2 - 0.5 microns apart from a substrate to which they may be attracted through the application of an electric field. Ribbons of these approximate dimensions are capable of moving between rest and deflected positions in as little as a few tens of nanoseconds.
[04] The high speed of MEMS ribbon devices has led to display designs in which a linear array of ribbons modulates a line image that is scanned across a viewing area. The ribbons move so fast that a linear array of them can create a sequence of line images to form a two-dimensional image without any perception of flicker by a human observer. Modulating light with linear, rather than two-dimensional, arrays also leads to compact modulators that make efficient use of valuable silicon chip real estate.
[05] MEMS linear-array light modulators are thus attractive candidates for integration with CMOS manufacturing processes. A MEMS linear-array may even be considered to be an optical output stage for an integrated circuit. Many CMOS electronic driver chips operate with unipolar supply voltages, however, and unipolar drive does not always work well with ribbon devices. In extreme cases ribbons driven from a unipolar power supply fail to respond after just a few minutes of operation.
[06] What are needed, therefore, are robust methods to drive MEMS ribbon devices using unipolar power supplies so that ribbons and CMOS electronics can be tightly integrated.
Brief Description of the Drawings
[07] Fig. 1A is a cross sectional sketch of a MEMS ribbon and substrate.
[08] Fig. 1B is an equivalent circuit for the structure shown in Fig. 1A.
[09] Figs. 2A and 2B illustrate the direction of an electric field between a ribbon and a substrate under different conditions.
[10] Fig. 3 shows graphs of voltages and fields in a pseudo bipolar, 50% discharge duty cycle drive scenario with flyback time.
[11] Fig. 4 illustrates voltages in a pseudo bipolar drive scenario with less than 50% discharge duty cycle.
[12] Figs. 5A and 5B show charge test data.
Detailed Description
[13] Pseudo bipolar MEMS ribbon drive methods described below are designed to avoid difficulties that may otherwise arise when unipolar CMOS electronics are used to drive MEMS ribbon devices. MEMS ribbon devices are typically made using high temperature silicon semiconductor fabrication processes that include deposition of high-stress, stoichiometric silicon nitride (Si3N4). It is unusual to use high-stress layers in MEMS; however, in the case of a ribbon, the high tensile stress of stoichiometric silicon nitride is the source of tension that allows the ribbon to move quickly.
[14] Ribbons are attracted to a substrate when a voltage is applied between the two. The force exerted on the ribbon is proportional to the square of the electric field created. Because silicon nitride is an insulator, the gap between a ribbon and a silicon dioxide substrate layer has no conductor adjacent to it. Dielectrics on either side of the gap accumulate surface charges when a voltage is applied between the ribbon and the substrate. These surface charges change
the strength of the electric field in the gap, so the movement of the ribbon for a given applied voltage varies over time.
[15] Surface charges accumulate when voltages applied to a ribbon are always of the same sign. Simple drive circuits with unipolar power supplies contribute to this effect. However, because force is independent of the sign of the field, fields of opposite direction but equal magnitude create equal ribbon deflection. Therefore surface charge accumulation effects may be reduced by operating with fields pointing in one direction (e.g. from ribbon to substrate) part of the time and in the opposite direction at other times. These principles and details of pseudo bipolar MEMS ribbon drive methods are now discussed in detail in concert with the accompanying figures.
[16] Fig. 1A is a cross sectional sketch of a MEMS ribbon and substrate. In the Figure, high-stress, stoichiometric silicon nitride 105 is the structural layer in a MEMS ribbon. The ribbon is separated by a small gap from a silicon substrate 115 upon which a silicon dioxide layer 110 has been grown. Aluminum conductive layer 120 may be deposited on the nitride ribbon during back-end processing after high-temperature steps are complete. (Other processes may be used to make the same structure.) In one example structure, the ribbon is about 200 microns long and about 3 microns wide; the thicknesses of the layers are approximately: aluminum, 600 Å; stoichiometric silicon nitride, 1500 Å; and, silicon dioxide, 2 microns. The air gap between the nitride and oxide layers (previously filled by an amorphous silicon sacrificial layer) is about 0.4 microns. (These dimensions are provided only to offer a sense of the scale involved; they are not intended to be limiting.)
[17] Plus (+) and minus (-) signs in Fig. 1A, such as 125, 126, 127, 128, 129, 130, 131, and 132, indicate accumulation of electric charges in the structure. In particular, surface charges, such as 125, 126, 127, and 128 in the gap between ribbon and substrate, change the magnitude of the electric field that results from a potential difference between VR, applied to the aluminum layer via connection 140, and VS, applied to the silicon substrate via connection 145. When a unipolar drive circuit is used, VR is always greater than (or always less than) VS. When a bipolar or pseudo bipolar drive circuit is used, the situation alternates between VR > VS and VR < VS.
[18] Fig. 1B is an equivalent circuit for the structure shown in Fig. 1A. In Fig. 1B, VR and VS are voltages applied to the ribbon and substrate, respectively, as in Fig. 1A. Capacitors C1, C2 and C3 represent the capacitances of the nitride layer, air gap and oxide layer, respectively. There are several high resistance current leakage paths represented by resistors in the circuit as follows: R1, leakage around the edges of the nitride layer; R2, leakage across the air gap; R3, leakage from the aluminum layer to the oxide layer; R4, leakage along the surface of the oxide layer; and R5, leakage from the nitride layer to the silicon substrate. Other leakage paths, and effects due to trapped charges in dielectric layers, are possible and may result in accumulation of surface charges with signs opposite those illustrated in Fig. 1A.
[19] In practice, it may be difficult to identify precise values for C1 through C3 and R1 through R5, but if the entire structure is considered to be a single parallel plate capacitor with one leakage resistance, then its charging time constant is τ = Rleak · Ctotal. In one example structure, τ ~ 10³ seconds.
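A rough order-of-magnitude check of this time constant can be made by treating the structure as a single air-gap parallel-plate capacitor with the example ribbon dimensions given above; ignoring the nitride and oxide layers is a simplifying assumption made only for illustration.

# Rough order-of-magnitude sketch, not a statement of the actual device values.
# The ribbon/substrate structure is approximated as a single air-gap parallel-plate
# capacitor using the example dimensions given above (200 um x 3 um ribbon,
# 0.4 um gap); the nitride and oxide layers are ignored for simplicity.

EPS0 = 8.854e-12   # F/m, permittivity of free space

length = 200e-6    # m, ribbon length (example value from the text)
width = 3e-6       # m, ribbon width
gap = 0.4e-6       # m, air gap

c_total = EPS0 * length * width / gap   # ~1.3e-14 F, about 13 fF
tau = 1e3                               # s, example charging time constant
r_leak = tau / c_total                  # implied leakage resistance

print(f"C_total ~ {c_total:.2e} F")
print(f"R_leak  ~ {r_leak:.2e} ohm")    # ~7.5e16 ohm for tau = 1000 s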
[20] Figs. 2A and 2B illustrate the direction of an electric field between a ribbon and a substrate under different conditions. In Figs. 2A and 2B, a schematic cross section of a ribbon 205 is
shown near a substrate 210. In Fig. 2A, a voltage between the ribbon and the substrate has made the ribbon more positively charged than the substrate and the direction of the resulting electric field, E, is from ribbon to substrate. In Fig. 2B, the opposite is true: a voltage between the ribbon and the substrate has made the substrate more positively charged than the ribbon and the direction of the resulting electric field, E, is from substrate to ribbon. If the magnitude of E is the same, however, then the force proportional to E2 acting between the ribbon and the substrate is the same in both Fig. 2A and 2B.
[21] When a bipolar power supply is available, switching between the scenarios of Fig. 2A and Fig. 2B is a matter of connecting voltage sources of different polarity to the ribbon while the substrate remains grounded, as an example. When only a unipolar power supply is available, a similar effect may be obtained by controlling the potential of both the ribbon and the substrate rather than leaving the substrate always at ground. This mode of operation is called "pseudo bipolar".
[22] Fig. 3 shows graphs of voltages and fields in a pseudo bipolar, 50% discharge duty cycle drive scenario with flyback time. In Fig. 3, graph 305 shows ribbon voltage versus time, while graph 310 shows substrate voltage versus time. Graph 315 plots the strength and polarity of the electric field between a ribbon and the substrate. Starting from the left hand side of the figure with time increasing to the right, voltage +V is applied to a ribbon for a duration t1. During this time the substrate voltage is zero and the electric field in the direction from the ribbon to the substrate is positive with magnitude E. Next, for a duration t2, voltages applied to the ribbon and substrate are both zero, as is the electric field between them. Next, voltage +V is applied to the substrate for a duration t1. During this time the ribbon voltage is zero and the electric
field in the direction from the ribbon to the substrate is negative with magnitude E. Next, for a duration t2, voltages applied to the ribbon and substrate are both +V, and the electric field between them is zero. After that, the cycle repeats.
[23] In Fig. 3, times t1 are those when a ribbon is deflected by electrostatic force proportional to the square of the electric field created between the ribbon and the substrate. During alternating t1 times the direction of the electric field is opposite. This characteristic of the drive scheme reduces or eliminates the accumulation of surface charges in a ribbon device. The discharge duty cycle is 50% because the electric field points in each of two directions half the time. Time t1 is referred to as a "frame" time; it is a time when image data determines which ribbons in an array are deflected and by what amount. In one example design, t1 is about 14 ms. During times t2, the voltages applied to the ribbon and the substrate are equal and therefore the electric field is zero and surface charges do not accumulate. Time t2 is referred to as a "flyback" time; it is a time when ribbons are undeflected and scanning mirrors or other scanning mechanisms can return to their starting point. In one example design, t2 is about 3 ms.
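The cycle described above can be summarized in a short sketch that tabulates the ribbon and substrate voltages and the resulting field direction; only the example frame and flyback times are taken from the text, and the drive amplitude is arbitrary.

# Sketch of one cycle of the 50% discharge duty cycle scheme of Fig. 3.
# The frame (t1) and flyback (t2) durations follow the example design above;
# the drive amplitude V is arbitrary. Field polarity is taken as the sign of
# (ribbon voltage - substrate voltage), positive meaning ribbon -> substrate.

T1 = 14e-3   # s, frame time (image data applied)
T2 = 3e-3    # s, flyback time (ribbons undeflected)
V = 1.0      # arbitrary drive amplitude

# (duration, ribbon voltage, substrate voltage) for one full cycle
cycle = [
    (T1, +V, 0.0),   # frame, field ribbon -> substrate
    (T2, 0.0, 0.0),  # flyback, no field
    (T1, 0.0, +V),   # frame, field substrate -> ribbon
    (T2, +V, +V),    # flyback, equal potentials, no field
]

pos = sum(dt for dt, vr, vs in cycle if vr > vs)
neg = sum(dt for dt, vr, vs in cycle if vr < vs)
print(f"field ribbon->substrate for {pos*1e3:.0f} ms per cycle")
print(f"field substrate->ribbon for {neg*1e3:.0f} ms per cycle")
print(f"share of field-on time in each direction: {pos/(pos+neg):.0%}")
# The two field directions are balanced (50% each of the field-on time),
# which is what suppresses surface charge accumulation.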
[24] In Fig. 3 the frame data is simply maximum ribbon deflection for the entire frame time which leads to a rather boring, all white image. The data for an actual image would contain a complicated modulation pattern during the frame time. Fig. 3 illustrates the polarity of the ribbon deflection signals regardless of the complexity of the image data, however.
[25] If the image data were significantly different from one frame to the next, the drive scheme of Fig. 3 might still lead to charging effects. In practice, this is a small effect; however, it may be eliminated by displaying each image frame twice in succession: once with positive
ribbon and grounded substrate, once with grounded ribbon and positive substrate. This way the average electric field is always zero regardless of image data. The trade-off is that the frame rate is doubled; however, MEMS ribbons move so fast that an increased frame rate can be accommodated depending on the number of pixels to be displayed.
[26] Fig. 4 illustrates voltages in a pseudo bipolar drive scenario with less than 50% discharge duty cycle. In Fig. 4 graph 405 shows substrate voltage versus time while graphs 410 and 415 show voltage versus time for two adjacent ribbons: a "bias" ribbon and an "active" ribbon, respectively. The bias ribbon 420, active ribbon 425 and substrate 430 are shown schematically at 440 during a video active time and at 445 during flyback blank time.
[27] Not all ribbon array devices use bias and active ribbons. When used, a bias ribbon takes the place of a fixed ribbon to provide a way to make fine, static adjustments to dark levels in a video display system. The bias ribbon stays still during video active time. Its movement during flyback blank time is a byproduct of the pseudo bipolar drive scheme described below.
[28] Starting from the left hand side of Fig. 4 with time increasing to the right, the substrate voltage is equal to zero, voltage +V2 is applied to the bias ribbon, and voltage +V3 is applied to the active ribbon. This is the condition for a maximum brightness pixel during a video active time. Next, for a duration t4, the bias and active ribbons are at zero voltage. Within this flyback blank time t4, for a duration t5, voltage +V1 is applied to the substrate. Next, during video active time t3, the situation returns to positive voltages applied to bias and active ribbons with zero voltage applied to the substrate.
[29] During video active times t3, bias ribbon 420 is deflected slightly to calibrate a dark level while active ribbon 425 is deflected according to video data to be displayed. At 440, the active
ribbon is depicted at maximum deflection consistent with the application of maximum voltage +V3. During flyback blank times t4, bias and active ribbons are deflected by the same amount, ensuring a dark state. The direction of the electric field is opposite during flyback blank time compared to video active time, thus reducing surface charge accumulation. The time t5 during which a voltage is applied to the substrate is slightly shorter than the entire flyback blank time t4 to reduce the possibility of spurious light signals at the beginning or end of a frame. In one example design, t3 is about 14 ms, t4 is about 3 ms and t5 is about 2 ms. The discharge duty cycle is t5/(t3 + t4) or about 12% in this case. (Discharge duty cycle is defined as the fraction of time during which the electric field points in one particular direction during a video active / flyback blank cycle. The discharge duty cycle is 50% or less by definition.)
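The quoted 12% figure follows directly from the example times; a one-line check:

# Arithmetic check of the example numbers quoted above for the Fig. 4 scheme.
t3 = 14e-3   # s, video active time
t4 = 3e-3    # s, flyback blank time
t5 = 2e-3    # s, substrate-positive portion of the flyback blank time

duty = t5 / (t3 + t4)
print(f"discharge duty cycle = {duty:.1%}")   # ~11.8%, i.e. about 12%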
[30] The pseudo bipolar drive scheme of Fig. 4 has provided good experimental results despite the discharge duty cycle being less than 50%. In some MEMS ribbon array devices lookup tables are used to remember how much voltage is required to deflect a ribbon by a desired amount. The pseudo bipolar drive scheme of Fig. 3 may require two such lookup tables; one for positive ribbon voltages and one for positive substrate voltages. In the pseudo bipolar drive scheme of Fig. 4, however, only one lookup table is required as the active ribbon always has a positive voltage applied to it during video active times.
[31] In some cases, the pseudo bipolar drive scheme of Fig. 3 may also be operated with only one lookup table by taking advantage of the properties of binary arithmetic. If ribbon deflection levels for a display are represented by an N-bit binary number, for example, then such levels for alternating polarity frames are related by subtraction from the binary representation of 2^N - 1. As an example, if the voltage required to deflect a ribbon by a desired
amount during ribbon-positive, substrate-grounded operation is represented by {10101101}, then the corresponding voltage required to deflect the ribbon by the same amount during ribbon-grounded, substrate-positive operation is represented by {01010010}. The difference between the two frames may be determined by exchanging 1 for 0 and vice versa in the binary representations of ribbon deflection voltages.
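This bit-complement relationship is easy to verify numerically; the sketch below uses the 8-bit example values quoted above.

# Sketch of the single-lookup-table trick described above: with N-bit deflection
# levels, the level for the opposite-polarity frame is (2**N - 1) - level, which
# is the same as inverting every bit.

N = 8
level_positive_frame = 0b10101101   # example ribbon-positive code from the text

level_reversed_frame = (2**N - 1) - level_positive_frame
bit_inverted = level_positive_frame ^ (2**N - 1)

print(f"{level_reversed_frame:08b}")   # 01010010, as in the example
assert level_reversed_frame == bit_inverted == 0b01010010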
[32] Prevention of charge accumulation in the pseudo bipolar drive scheme of Fig. 4 depends on the relationship between V1 and V3. Usually, V1 is chosen to be the maximum voltage available on chip, e.g. the supply voltage, while V3 varies constantly with video content. In general, the greater the difference between V1 and V3, the shorter t5 can be while still preventing surface charge accumulation.
[33] Figs. 5A and 5B show charge test data. Fig. 5A shows data for a ribbon with a constant voltage applied to it with respect to a substrate. Fig. 5B shows data for ribbons driven according to 50% and 12% discharge duty cycle, pseudo bipolar drive schemes illustrated in Figs. 3 and 4, respectively. In both figures the horizontal axis is time in units of hours while the vertical axis is pixel intensity of a ribbon-based light modulator. The pixel intensity is directly related to ribbon deflection.
[34] In Fig. 5A, triangles indicate data points acquired at approximately 1.75, 2.5 and 3.5 hours after a constant voltage applied to a ribbon was turned on. Ribbon deflection in response to the constant applied voltage steadily increases as time passes. After 3.5 hours the ribbon in this test no longer responded to changes in applied voltage. The accumulation of surface charges became too great.
[35] In Fig. 5B, squares indicate data points acquired for a ribbon under a 50% discharge duty cycle and diamonds indicate data points acquired for a ribbon under a 12% discharge duty cycle, in both cases over a period of more than 20 hours. The intensity units in Fig. 5B are arbitrary and there is no significance to the fact that the square data points appear at higher intensity than the diamond data points. Both sets of data points show that pseudo bipolar drive schemes lead to consistent ribbon deflection versus applied voltage over several hours. At the end of each test the ribbons responded to applied voltages just as they had at the beginning of the tests.
[36] The embodiments of pseudo bipolar drive schemes have been described in terms of positive voltages with respect to ground. Clearly, however, negative voltages may be used.
[37] In conclusion, pseudo bipolar MEMS ribbon drive methods described above are designed to avoid difficulties that may otherwise arise when unipolar CMOS electronics are used to drive MEMS ribbon devices. Surface charge accumulation in MEMS ribbon structures is reduced or eliminated so that ribbons may be controlled by electrical signals indefinitely with no degradation in ribbon response.
[38] The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other embodiments without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
What is claimed is:
1. A method for driving a MEMS ribbon device comprising:
providing a MEMS ribbon device having a set of ribbons and a common electrode, the device characterized by charging time constant, τ, when modeled as a capacitor;
sending drive signals to the device in two alternating configurations:
a first configuration in which a first set of signals are represented by a first set of ribbon voltages and a first constant common electrode voltage of the same polarity as, and equal to or less in magnitude than, the first set of ribbon voltages; and,
a second configuration in which a second set of signals are represented by a second set of ribbon voltages and a second constant common electrode voltage of the same polarity as, and equal to or greater in magnitude than, the second set of ribbon voltages.
2. The method of Claim 1 wherein the second set of ribbon voltages are determined by:
(a) determining magnitudes of differences between the first set of ribbon voltages and the first constant common electrode voltage that would be needed to represent the second set of signals in the first configuration; and,
(b) subtracting the magnitudes determined in (a) from the second constant common electrode voltage.
3. The method of Claim 1 wherein all voltages are positive with respect to ground.
4. The method of Claim 1 wherein all voltages are negative with respect to ground.
5. The method of Claim 1 wherein the first constant common electrode voltage is
approximately zero with respect to ground.
6. The method of Claim 1 wherein the second constant common electrode voltage is approximately equal to a supply voltage of a chip upon which the MEMS ribbon device is fabricated.
7. The method of Claim 1 wherein the common electrode is a substrate of a chip upon which the MEMS ribbon device is fabricated.
8. The method of Claim 1 wherein the first and second sets of signals are different.
9. The method of Claim 1 wherein the first and second sets of signals are the same.
10. The method of Claim 1 wherein the signals in the first configuration represent image data.
11. The method of Claim 1 wherein the signals in the first configuration represent video data.
12. The method of Claim 1 wherein the signals in the second configuration represent image data.
13. The method of Claim 1 wherein the signals in the second configuration represent video data.
14. The method of Claim 1 wherein the signals are in the first configuration 50% of the time and in the second configuration 50% of the time.
15. The method of Claim 1 wherein the signals are in the first configuration less than 50% of the time.
16. The method of Claim 1 wherein the signals are in the second configuration less than 50% of the time.
17. The method of Claim 1 wherein the signals represent image data that are grouped into image frames and each image frame is sent once in the first configuration and once in the second configuration.
18. The method of Claim 1 wherein the two signal configurations alternate in a time less than τ.
19. The method of Claim 1 wherein the ribbon is in tension due to tensile stress in a stoichiometric silicon nitride layer in the ribbon.
Abstract of the Disclosure
A pseudo bipolar method for driving a MEMS ribbon device reduces charging effects in the device.
[Appendix C drawings: Figs. 2A - 2B; Fig. 3; Fig. 4 (video active time and flyback blank time); Figs. 5A - 5B (pixel intensity vs. time in hours for 50% and 12% discharge duty cycles).]
Appendix D (US 7,286,277)
Polarization light modulator
Related Applications
[01] This application is a continuation-in-part of US 11/161,452, filed on August 3, 2005, which is a continuation-in-part of US 10/904,766, filed on November 26, 2004, both of which are incorporated herein by reference.
Technical field
[02] The invention relates generally to visual display devices and light modulator systems. In particular it relates to differential, interferometric light modulator systems containing optical polarization-sensitive devices.
Background
[03] Display devices such as television sets and movie projectors often incorporate a modulator for the purpose of distributing light into a two- dimensional pattern or image. For example, the frames of a movie reel modulate white light from a projector lamp into shapes and colors that form an image on a movie screen. In modern displays light modulators are used to turn on and off individual pixels in an image in response to electronic signals that control the modulator.
[04] Texas Instruments introduced a microelectromechanical light modulator called a digital mirror device which includes millions of tiny mirrors on its surface. Each mirror corresponds to a pixel in an image and electronic signals in the chip cause the mirrors to move and reflect light in different directions to form bright or dark pixels. See, for example, U.S. Patent Number 4,710,732 incorporated herein by reference. Stanford University and Silicon Light Machines developed a microelectromechanical chip called a grating light modulator in which diffraction gratings can be turned on and off to diffract light into bright or dark pixels. See, for example, U.S. Patent Number 5,311,360 incorporated herein by reference.
[05] Both of these reflective and diffractive light modulation schemes for displays involve two-dimensional arrays of light modulator elements. However, it is also possible to make a display in which light is incident on a linear array of high speed light modulators. With appropriate magnifying optics and scanning mirrors, a linear array can be made to appear two-dimensional to an observer. Through the scanning action of a vibrating mirror a single row of light modulators can be made to do the work of as many rows of modulators as would be necessary to provide a real two-dimensional display of the same resolution. See, for example, U.S. Patent Number 5,982,553 incorporated herein by reference.
[06] Manhart introduced a display apparatus including a grating light-valve array and interferometric optical system. See US Patent Number 6,088,102 incorporated herein by reference. In Manhart a display system employs a planar grating light-valve (GLV) array as a spatial light modulator for representing an image to be displayed. The system relies for image representation on the position of moveable reflective elements of the GLV array, which move through planes parallel to the plane of the array. The moveable elements provide, from an incident phase-contrast wavefront, a reflected phase-modulated wavefront representing the image to be displayed. The displayed image is provided by interferometrically combining the phase-modulated wavefront with a reference wave-front also formed, directly or indirectly, from the incident phase-contrast wavefront.
[07] Many microelectromechanical light modulators are compatible with digital imaging techniques. Digital information may be sent electronically to the modulator. For example, gray scale images may be achieved by turning pixels on only part time. A pixel that is switched from bright to dark with a 50% duty cycle will appear to an observer to have a constant intensity half way between bright and dark. However, the pixel must be switched between bright or dark states faster than the human eye's critical flicker frequency of roughly 30 Hz or
Appendix D else it will appear to flicker. Therefore two-dimensional digital light modulators for displays must switch between states quickly to display a range of light levels between bright and dark.
[08] A one-dimensional digital light modulator array, scanned by a vibrating mirror to make it appear two-dimensional, must incorporate modulators with fast switching speeds. Each modulator element must switch on and off quickly to provide the impression of gray scale and this action must be repeated for each pixel in a line within the scanning period of the mirror. Grating light modulator devices in particular exhibit high switching speeds because their mechanical elements move only very short distances. The grating light modulator incorporates parallel ribbon structures in which alternating ribbons are deflected electrostatically to form diffraction gratings. The ribbons need only move a distance of one quarter wavelength of light to switch a grating on or off. It is also possible (and desirable in many instances) to operate one- or two-dimensional light modulators in analog, rather than digital, modes.
[09] Gudeman proposed an interferometric light modulator based on a mechanical structure very similar to the grating light modulator; see U.S. Patent Number 6,466,354 incorporated herein by reference. Gudeman's light modulator is a form of Fabry-Perot interferometer based on a ribbon structure.
[010] Microelectromechanical light modulators typified by the Texas
Instruments' digital mirror device and Stanford / Silicon Light Machines grating light modulator devices mentioned above have already enjoyed wide commercial success and have spawned other related designs. See, for example, U.S. Patent Number 6,724,515 incorporated herein by reference.
[011] The digital mirror device is comparatively slow and therefore is usually supplied as a two-dimensional mirror array. Usually two-dimensional modulator arrays are more expensive to make than one-dimensional arrays and require a sophisticated addressing scheme for the mirrors. A two-dimensional array requires defect-free manufacturing of N x N pixels over a large chip area while a
one-dimensional array with the same image resolution requires only N working pixels on a chip in a single line.
[012] Grating light modulator devices, while very fast, have limitations due to diffraction. A grating light modulator has a reflective state or configuration and a diffractive state. In the diffractive state incoming light is diffracted into the +1 and -1 diffraction orders of an optical grating. However, only about 80% of the light is collected in these two orders.
[013] An interferometric light modulator that has many desirable features was disclosed in "Differential interferometric light modulator and image display device," US 10/904,766 filed on November 26, 2004, incorporated herein by reference. That device features high speed and high contrast. The
interferometric design means that light is not lost in higher diffractive orders (as can be a problem in diffractive devices), nor does it require discriminating diffracted from undiffracted light.
[014] In US 10/904,766 a novel light modulator incorporates a polarizing prism to split light beams into components of orthogonal polarization. These polarization components are made to travel unequal distances in the modulator and are then recombined in the prism. When one polarization component is phase shifted with respect to the other, the overall polarization of the recombined beam is transformed. The polarization of the recombined beam is then analyzed by a polarizing beam splitter. Light intensity output from the polarizing beam splitter depends on the polarization state of the incident light beam which in turn depends on the relative phase shift of the polarization components.
[015] A phase shift is imparted to the orthogonal polarization components in the modulator by focusing them on, and causing them to reflect from, an engineered, uneven surface. This phase shift surface has regions of slightly different displacement which cause the light beams to travel slightly different distances upon reflection. A novel microelectromechanical system (MEMS) ribbon array device is provided that is used to modulate the phase shift of light beams reflected from the surface of its ribbons.
[016] Generalized and improved interferometric light modulators were disclosed in "Differential interferometric light modulator and image display system," US 11/161,452 filed on August 3, 2005, incorporated herein by reference. Optical polarization displacement devices, designs for MEMS optical phase shift devices and compensation schemes to improve field of view were described.
[017] In US 11/161,452 a differential interferometric light modulator and image display system comprises a polarizing beam splitter, a polarization displacement device and a MEMS optical phase shifting device. A linear array of MEMS optical phase shifting devices serves to modulate a line of pixels in the display. The polarizing beam splitter acts as both the polarizer and the analyzer in an interferometer. The polarization displacement device divides polarized light from a polarizer into orthogonal polarization components which propagate parallel to one another. The MEMS optical phase shifting device, or array of such devices, imparts a relative phase shift onto the polarization components and returns them to the polarization displacement device where they are recombined and sent to the analyzer. The MEMS optical phase shifting devices are electronically controlled and convert electronic image data (light modulation instructions) into actual light modulation.
[018] Further development is always possible, however. It would be desirable to have a polarization light modulator design that is as compact as possible.
Brightness and high contrast are important features of displays and are in need of continual improvement. For some applications, such as head-mounted displays, a viewer designed to be placed close to an observer's eye is needed.
Brief description of the drawings
[019] The drawings are heuristic for clarity.
[020] Figs. 1A - 1D show schematically various polarization separating optical elements.
[021] Figs. 2A & 2B show a design for a polarization light modulator.
[022] Figs. 3A & 3B show a design for a compact polarization light modulator.
[023] Figs. 4A & 4B show a design for a polarization light modulator for close-up viewing.
[024] Figs. 5A - 5C show schematically a MEMS optical phase shifting device.
[025] Figs. 6A & 6B show schematically cross-sections of the device illustrated in Fig. 5A.
[026] Figs. 7A & 7B show schematically a MEMS optical phase shifting device with an aperture.
[027] Fig. 8 shows schematically a MEMS optical phase shifting device with an aperture wider than that illustrated in Fig. 7A.
Detailed description
[028] Display systems manipulate light to form images of text, graphics and other visual scenes. Light propagation involves a complex variety of phenomena including wave properties and polarization. In related applications, US
10/904,766 and US 11/161,452, a new class of display systems was introduced that comprise polarization interferometers combined with MEMS devices that shift the phase of optical waves.
[029] In these new systems a linear array of MEMS optical phase shifting devices serves to modulate a line of pixels in a displayed image. A polarizing beam splitter acts as both the polarizer and the analyzer in an interferometer while a polarization displacement device divides polarized light from the polarizer into orthogonal polarization components. The MEMS optical phase shifting device array imparts a relative phase shift onto the polarization components and returns them to the polarization displacement device where they are recombined and sent to the analyzer. The MEMS optical phase shifting devices are electronically controlled and convert electronic image data (light modulation instructions) into actual light modulation.
[030] In the interferometric light modulators disclosed in US 10/904,766 and US 11/161,452, the direction of polarization displacement is parallel to the ribbons or cantilevers in the MEMS optical phase shift device. This means that the light forming a particular pixel comes from light that was reflected from different parts of a single ribbon or cantilever.
[031] In this application a different optical arrangement is disclosed in which orthogonal polarizations are displaced perpendicular to ribbons in a MEMS optical phase shift device. Accordingly, light forming a displayed pixel comes from light reflected from more than one ribbon or cantilever. Also disclosed herein are optical designs for compact polarization light modulators and displays for close up viewing. Designs for MEMS optical phase shift devices are presented including optimizations for high power handling.
[032] A polarization light modulator display relies on interferometry to modulate pixels in a displayed image. Interferometry in turn depends on manipulating the phase of light to produce constructive or destructive interference. An important part of a polarization light modulator is a device that separates polarization components of light so that the relative phase between them can be changed.
[033] Figs. 1A - 1D show schematically various polarization separating optical elements. Elements shown in Figs. 1A - 1D were introduced in related applications US 10/904,766 and US 11/161,452; however, additional features are described here.
[034] In Figure 1A a Wollaston prism is shown. Figure 1B shows a Wollaston prism in combination with a lens placed one focal distance away. Figure 1C shows a Savart plate. Figure 1D shows a generalized polarization displacement device.
[035] The Wollaston prism shown in Figure 1A splits incoming light beam 102 into orthogonally polarized components 112 and 114. Light beams 112 and 114 propagate away from each other indefinitely since they exit the prism at different angles. The Wollaston prism is composed of two pieces of material 104 and 106 with optic axes oriented as shown by arrows 108 and 110.
[036] Dashed arrow 116 indicates that translation of the Wollaston prism perpendicular to incoming light beam 102 varies the properties of light beams 112 and 114. Translation varies the phase difference between the beams and therefore can be used to adjust the set point of an interferometer. Additionally, the prism can be tilted in the plane of the paper (i.e. about an axis perpendicular to the plane of the paper). Tilt can be used to make small adjustments in the separation angle, θ. This degree of freedom is helpful when matching polarization displacement to the distance from one ribbon to an adjacent ribbon in a MEMS optical phase shift device.
[037] Figure 1B shows a lens 160 placed one focal length away from a Wollaston prism. This situation is similar to that shown in Figure 1A except that the orthogonally polarized light beams 156 and 158 exiting the system are parallel to one another. It is desirable that polarization displacement devices have this property, namely that light beams leave them parallel to one another. That way the beams retrace their path upon reflection from a MEMS optical phase shifting device. The separation, d, is related to the focal length, f, and the separation angle, θ, according to d = f · θ when θ is a small angle.
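A worked example of d = f · θ, using an assumed focal length and separation angle chosen only to show that the resulting offset is of the same order as a ribbon width:

# Worked example of d = f * theta for a Wollaston prism followed by a lens.
# The focal length and separation angle below are illustrative assumptions,
# not values taken from the disclosure.

f = 25e-3        # m, assumed lens focal length
theta = 0.2e-3   # rad, assumed prism separation angle (small-angle regime)

d = f * theta
print(f"beam separation d ~ {d*1e6:.1f} um")   # 5.0 um, of order a ribbon width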
[038] It is normally advantageous to replace two optical components with one whenever possible. Such a replacement is accomplished by the Savart plate illustrated in Figure 1C. A Savart plate is an example of a walkoff crystal which imparts lateral displacement on polarization components of light incident upon it. (A Wollaston prism is an example of a birefringent prism which imparts angular separation on polarization components.) In Figure 1C input light beam 122 is divided into orthogonally polarized components 132 and 134. The Savart plate is composed of two pieces of material 124 and 126 with optic axes oriented as shown by arrows 128 and 130. Arrow 130 is dashed to indicate that it does not lie in the plane of the page; in fact, it forms a 45 degree angle with the plane of the page.
[039] Distances L1 and L2 indicate that thicknesses in the Savart plate vary the properties of light beams 132 and 134. These thicknesses can be designed to specify the set point of an interferometer. Additionally, the plate can be tilted in the plane of the paper (i.e. about an axis perpendicular to the plane of the paper). Tilt can be used to make small adjustments in the separation distance, d. This degree of freedom is helpful when matching polarization displacement to the distance from one ribbon to an adjacent ribbon in a MEMS optical phase shift device.
[040] In general any device can be used as a polarization displacement device as long as it has the effect shown in Figure 1D. An incoming light beam 162 is separated into two parallel light beams 164 and 166 which are polarized orthogonal to one another. Equivalently, if polarized light beams 164 and 166 are considered the input, then the device combines them into one beam 162. The polarization of beam 162 is then determined by the relative phase of the polarization components of beams 164 and 166.
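How the relative phase of the two recombined components sets the output intensity can be illustrated with a minimal sketch; the equal-amplitude, 45-degree geometry and the ideal analyzer are simplifying assumptions rather than the exact optical layout.

# Minimal sketch, not the exact optical layout: two orthogonal polarization
# components of equal amplitude acquire a relative phase phi (for example from
# reflecting off adjacent ribbons at different heights). A polarizing analyzer
# then converts that phase difference into intensity.

import cmath
import math

def analyzer_outputs(phi):
    """Intensities at the two ports of a 45-degree analyzer for relative phase phi."""
    ex, ey = 1 / math.sqrt(2), cmath.exp(1j * phi) / math.sqrt(2)
    bright = abs((ex + ey) / math.sqrt(2)) ** 2   # port aligned with the input polarization
    dark = abs((ex - ey) / math.sqrt(2)) ** 2     # orthogonal port
    return bright, dark

for phi in (0.0, math.pi / 2, math.pi):
    b, d = analyzer_outputs(phi)
    print(f"phi = {phi:4.2f}  ports: {b:.2f} / {d:.2f}")
# phi = 0  -> 1.00 / 0.00 (all light in one port)
# phi = pi -> 0.00 / 1.00 (light switched to the other port)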
[041] As described here and in US 10/904,766 and US 11/161,452, a polarization displacement device may be made from a Wollaston, Rochon or Senarmont prism in combination with a lens, a Savart plate or a modification thereof, or any other optical components which have the same effect.
[042] Figs. 2A & 2B show a design for a polarization light modulator. Figs. 2A and 2B are views of the same design from perpendicular perspectives. For convenience Fig. 2A may be referred to as a "top" view while Fig. 2B may be referred to as a "side" view.
[043] In both views, light from source 202 propagates through various optical elements before reflecting from MEMS optical phase shift device (MOPD) 220. On the return trip from MOPD 220 toward source 202 part of the light is deflected toward lens 208 by polarizing beam splitter 206. This is illustrated in Fig. 2B only; in Fig. 2A lens 208 is hidden behind polarizing beam splitter 206.
[044] Light from source 202 is focused at different places in different planes. For example in Fig. 2A the light is diverging from source 202 toward lens 204. In fact the source is placed approximately one focal length away from the lens so that light is collimated between lenses 204 and 2 2. MOPD 220 is placed
Appendix D approximately one focal length away from lens 212 such that the lens focuses
light on it. Viewed from the perpendicular direction in Fig. 2B, however, light from
source 202 is approximately collimated. Therefore, after the light passes through
lens 204, travels a distance approximately equal to the combined focal lengths of
lenses 204 and 212, and passes through lens 212, it is approximately collimated
when it reaches MOPD 220.
[045] An equivalent description is that light at MOPD 220 is focused in a narrow,
slit-shaped cross section. At MOPD 220 the light is elongated perpendicular to
the plane of the paper in Fig. 2A and in the plane of the paper in Fig. 2B. As
described below this elongated illumination of the ribbon array in MOPD 220 is
advantageous for efficient use of light and corresponding high brightness in a
display.
[046] Wollaston prism 210 and lens 212 form a polarization displacement device
as described in US 10/904,766 and US 11/161,452. Accordingly, different
polarization displacement devices may be substituted for them without altering
the principle of operation of the polarization light modulator.
[047] The spatial relationship between the elongated focusing direction and the
polarization displacement direction of the light in Figs. 2A and 2B differs from that
of previous designs described in US 10/904,766 and US 11/161,452. In previous
designs the polarization displacement device separated light into slit-shaped
beams that were offset in a direction perpendicular to the long axis of the slit-shaped cross section. Here the polarization displacement device (i.e. Wollaston prism 210 and lens 212) separates light into slit-shaped beams that are offset in
a direction parallel to the long axis of the slit-shaped cross section. This is
indicated by dotted lines in Fig. 2B which show part of the light in the system
displaced by a distance, d, at MOPD 220. The displacement is not visible in Fig.
2A because it is perpendicular to the plane of the paper.
[048] Polarization components of light arriving at MOPD 220 are offset
perpendicular to the ribbons in the MOPD. This is also illustrated in Fig. 7A, for
example, where region 734 (bounded by a heavy dashed line) encompasses
"
orthogonal polarizations of light that are offset by the width of ribbon 506 or 508 in a direction perpendicular to the ribbons and in the plane of the paper.
[049] In Figs. 2A and 2B it is helpful if source 202 is a line source; however, if it is not, its shape can be modified by beam shaping optics (not shown). Polarizing beam splitter 206 acts as both the polarizer and the analyzer in the
interferometer that it forms with the polarization displacement device (Wollaston prism 210 and lens 212) and the MOPD 220. Two arms of the interferometer are formed by orthogonal polarizations of light which travel slightly different paths to and from the MOPD.
[050] In Figs. 2A and 2B, source 202 is in line with lens 204, polarizing beam splitter 206, Wollaston prism 210, lens 212 and MEMS device 220. Light reflected by polarizing beam splitter 206 toward lens 208 forms a line image which may be scanned to create a two dimensional image. It is entirely possible however to place the light source in a position where lens 208 would focus its light into the optical system and to form a line image where source 202 is shown. The choice between these two equivalent arrangements depends on
practicalities such as contrast achieved by the polarizing beam splitter in transmission versus reflection, and the shape of the light source used.
[051] Figs. 3A & 3B show a design for a compact polarization light modulator. Figs. 3A and 3B are views of the same design from perpendicular perspectives. For convenience Fig. 3A may be referred to as a "top" view while Fig. 3B may be referred to as a "side" view. In the figures source 302 provides light that converges to a waist near MOPD 320 when viewed from the perspective of Fig. 3A but is collimated when viewed from the perpendicular perspective of Fig. 3B. Examples of suitable sources include line sources or point sources shaped by cylinder lenses (not shown).
[052] PDD 311 is a "polarization displacement device" as that term is defined in US 11/161,452. Its function is to offset orthogonally polarized components in an incoming light beam into two parallel beams of light. An example of a polarization displacement device is a polarizing prism, such as a Wollaston or
Rochon prism, in combination with a lens. MOPD 320 is a "MEMS optical phase shift device" as that term is defined in US 11/161,452. Its function is to impart an electronically controllable phase shift upon incident light. Many types of MOPD were discussed in US 11/161,452. Details of one MOPD are discussed here in connection with Figs. 5 - 8.
[053] In Fig. 3B lens 308 is placed approximately one focal length away from MOPD 320. The lens is not shown in Fig. 3A because it is hidden behind polarizing beam splitter 306 in that view. Also drawn in Fig. 3B is a graph 330 of light intensity, I, versus position, x, in the focal plane of lens 308. In other words, the dotted x-axis of graph 330 and MOPD 320 are both approximately one focal length away from lens 308, albeit in opposite directions. Two intensity plots 332 and 336 are drawn on graph 330. Item 340 is a double-slit aperture or stop.
[054] The dotted x-axis of graph 330 lies in the Fourier plane for MOPD 320. Thus when MOPD is modulated, for example, in a square wave pattern where every other ribbon is deflected, the light intensity at the Fourier plane will be approximately that shown by plot 332. When the MOPD is unmodulated, i.e. when no ribbons are deflected, the light intensity at the Fourier plane will be approximately that shown by plot 336.
[055] The available contrast between dark and light states in the polarization light modulators described so far is determined mainly by the ability of the polarizing beam splitter to discriminate between polarizations. In an ideal case all light of one polarization is transmitted by the polarizing beam splitter while all light of the orthogonal polarization is reflected. In practice, however, some light in the "wrong" polarization leaks through or is reflected unintentionally.
[056] Double-slit aperture or stop 340 may be used to increase contrast in a polarization light modulator. If aperture 340 is placed at the Fourier plane of lens 308 it blocks light when MOPD 320 is unmodulated but passes light when the MOPD is modulated. This increases the contrast that is provided by the polarization discrimination of polarizing beam splitter 306.
[057] The dotted x-axis of graph 330 lies in the Fourier plane of the MOPD as a whole; however, it is not the image plane for pixels in the line image that are modulated at the MOPD. When lens 308 is placed one focal length from MOPD 320, the line image is formed at infinity. The image can be brought closer to the lens by moving the lens away from the MOPD in accordance with the thin lens imaging equation 1/d1 + 1/d2 = 1/f, where d1 and d2 are distances to the image and the MOPD measured from the lens. Alternatively the image may be viewed with additional optics (not shown).
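A worked example of this imaging relation, with an assumed focal length and an assumed lens-to-MOPD spacing slightly greater than the focal length:

# Worked example of the imaging relation 1/d1 + 1/d2 = 1/f quoted above.
# The focal length and the extra lens-to-MOPD spacing are assumptions chosen
# only to illustrate how moving the lens pulls the image in from infinity.

f = 25e-3    # m, assumed focal length of lens 308
d2 = 26e-3   # m, assumed lens-to-MOPD distance (slightly more than f)

d1 = 1.0 / (1.0 / f - 1.0 / d2)   # image distance on the other side of the lens
print(f"image distance d1 ~ {d1*1e3:.0f} mm")   # 650 mm; with d2 = f it goes to infinity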
[058] Figs. 4A & 4B show a design for a polarization light modulator for close-up viewing. Such a design is appropriate for head-mounted displays where the observer's eye is close to the device.
[059] Figs. 4A and 4B are views of the same design from perpendicular perspectives. For convenience Fig. 4A may be referred to as a "top" view while Fig. 4B may be referred to as a "side" view. In the figures, source 402 provides light that is collimated when viewed from the perspective of Fig. 4A, but is diverging toward lens 404 when viewed from the perpendicular perspective of Fig. 4B. In Fig. 4B the source diverges from a location such that lens 404 collimates the light; i.e. the source is approximately one focal length away from lens 404. Examples of suitable sources include line sources or point sources shaped by cylinder lenses (not shown).
[060] Item 406 in the figures is a thin polarizing beam splitter that also acts as a scanning mirror. It can be rotated about an axis (not shown) perpendicular to the paper in Fig. 4B. The curved arrow near the thin polarizing beam splitter 406 in Fig. 4B indicates the approximate scanning motion. Lens 407 is located approximately one focal length away from MOPD 420; item 411 is a polarization displacement device.
[061] Viewed from the perspective of Fig. 4A light is focused to a waist between lenses 404 and 407 while it remains collimated between those two lenses in the perpendicular perspective of Fig. 4B. The focus need not coincide with the position of thin polarizing beam splitter 406.
[062] The eye of an observer is drawn schematically in Fig. 4B as item 424; the lens of the eye is item 426. When lens 407 is placed one focal length from MOPD 420 the image of the MOPD appears at infinity. However, the lens 426 in the eye 424 of an observer forms the image on the retina in the back of the eye for easy viewing. The image is a line image that originates from a thin sheet of light modulated by a linear array of electronically controlled phase shifting surfaces in the MOPD. When thin polarizing beam splitter 406 rotates, the line image moves across the retina in an observer's eye. This scanning motion is used to create a two dimensional image from the line image.
[063] Figs. 5A - 5C show schematically a MEMS optical phase shifting device. Figs. 5B and 5C are cross sections of Fig. 5A along the lines indicated. In Figs. 5A - 5C item 502 is a substrate or support base; 504 is an end support; 510 is an intermediate support. Items 506 and 508 are ribbon structures; 506 is a ribbon supported by intermediate supports while 508 is a ribbon without intermediate supports. In Figs. 5A - 5C only eight ribbons are shown while an actual device may contain hundreds or thousands of ribbons. The figure is schematic only.
[064] Fig. 5B shows that at the cross section marked "5B" in Fig. 5A there are no supports between the substrate and the ribbons. Conversely Fig. 5C shows that at the cross sections marked "5C" in Fig. 5A there are supports for every other ribbon. In Fig. 5A cross sections "5C" are marked approximately 1/3 of the way from the ends of the ribbons and this is a preferred arrangement; however, other designs are possible. It is only important that every other ribbon is stiffened, by supports or other means, and that the center portion where light is reflected by the ribbons is free of supports.
[065] Devices of the type shown in Figs. 5A - 5C may be constructed using any standard MEMS fabrication processes such as those outlined in US 10/904,766. Although the drawings are not to scale, one may appreciate the size of a typical device by noting that the ribbons are normally between about one and one hundred microns long; they flex toward the substrate by roughly 0.05 to 0.5 microns.
[066] Figs. 6A & 6B show schematically cross-sections of the device illustrated in Fig. 5A at the sections marked "6A" and "6B" respectively. All of the numbered items in Figs. 6A and 6B correspond to the like-numbered items in Figs. 5A - 5C. Voltage signal or source 610 was not illustrated in Figs. 5.
[067] When a voltage is applied to a ribbon that is only supported at its ends, as exemplified by ribbon 508, the ribbon flexes toward the substrate. The distance, D, that the ribbon is deflected is approximately one quarter wavelength of light in normal operation of an MOPD in the polarization light modulators of Figs. 2 - 4. Conversely when a voltage is applied to a ribbon that is supported by
intermediate supports, as exemplified by ribbon 506, the ribbon flexes far less than in the unsupported case. Ribbon 506 in Fig. 6B is drawn as not flexing at all; in practice it may flex slightly. The deflection is a nonlinear function of the distance between supports, however, so it can be significantly different for the supported and unsupported ribbons.
[068] An advantage of using supports to stiffen alternating ribbons is that each ribbon can be the same thickness and made from the same material. However alternate methods besides supports may be used if the end result remains the same: alternating ribbons are deflected different amounts under the influence of an applied voltage.
[069] Figs. 7A & 7B show schematically a MEMS optical phase shifting device with an aperture. Fig. 7A shows a view of an MOPD similar to the view shown in Fig. 5A while Fig. 7B shows a view similar to that in Fig. 6A. In Figs. 7 however, an aperture 722 has been placed over the ribbon structure.
[070] In Figs. 7 items 502 - 610 are the same as like-numbered items in Figs. 5 and 6. Item 710 is a spacer. Items 720 and 722 form an aperture structure from a clear sheet 720 with an opaque coating 722. In Fig. 7A aperture structure 720 / 722 is shown in a cutaway view. The hatched area and dotted line represented by 730 show the approximate extent of an elongated light beam incident upon the MOPD. Rays 731 also represent the light beam as viewed from a direction perpendicular to the direction of propagation.
[071] Bounded region 732 represents the transverse extent of light that has passed through aperture structure 720 / 722 and is incident on the ribbons of the MOPD. Within bounded region 732, area 734, which is delineated by a heavy dashed border, shows the portion of the MOPD from which reflected light makes up a single pixel in the line image output from a polarization light modulator such as any of the modulators illustrated in Figs. 2 - 4.
[072] Aperture structure 720 / 722 prevents stray light that would not contribute to a line image from being reflected by the MOPD ribbons. Preferably the aperture does not affect the polarization of light reflecting from it. In Figs. 7A and 7B the aperture is shown as being formed by a patterned, opaque coating on a clear substrate such as glass; however, an aperture formed in another way but performing the same function is also acceptable. The aperture is separated from the ribbons of the MOPD by spacer 710. To keep the aperture in the near field, the spacer thickness should be less than ~ w²/λ, where w is the size of the aperture and λ is the wavelength of light.
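An order-of-magnitude check of the near-field condition, using assumed values for the slot width and wavelength rather than any particular design:

# Order-of-magnitude check of the near-field condition: spacer << w**2 / lambda.
# The slot width and wavelength are assumed values, not the disclosed design.

w = 20e-6            # m, assumed aperture slot width
wavelength = 0.5e-6  # m, assumed (visible) wavelength

limit = w ** 2 / wavelength
print(f"spacer should be well below ~ {limit*1e6:.0f} um")   # ~800 um for these values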
[073] Area 734 represents the area of the ribbon device from which reflected light forms a single pixel in a line image. Area 734 is shown as being
approximately square in the figure, but it may be rectangular in practice. The length of one side of the area is set by the width of the open slot in aperture 720 / 722. The length of a perpendicular side of the area is equal to the width of two ribbons in the MOPD. Recall that the PDD in the polarization light modulators of Figs. 2 - 4 provides an offset for one polarization of light incident upon an MOPD. The magnitude of the offset is shown by "d" in Figs. 1B - 1D, 2B, and by dotted lines in Figs. 3B and 4A.
[074] The polarization light modulator is designed so that the offset matches the width of a ribbon in the MOPD. That way the interferometer in the polarization light modulator compares the phase of light reflected by adjacent ribbons in the MOPD. As one of the ribbons in an adjacent pair moves while the other remains stationary, the phase of light reflected by the ribbons varies by 4πD/λ, where D is the displacement of the moving ribbon and λ is the wavelength of the light.
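A worked example of this phase relation, assuming a visible wavelength and the quarter-wave ribbon deflection mentioned earlier:

# Worked example of the phase relation 4*pi*D/lambda quoted above, using a
# quarter-wave ribbon deflection (the normal operating deflection mentioned
# earlier); the wavelength value is an assumption.

import math

wavelength = 0.5e-6   # m, assumed wavelength
D = wavelength / 4    # quarter-wave ribbon deflection

phase = 4 * math.pi * D / wavelength
print(f"relative phase = {phase/math.pi:.1f} * pi")   # 1.0 * pi, a full bright/dark swing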
[075] Fig. 8 shows schematically a MEMS optical phase shifting device with an aperture wider than that illustrated in Fig. 7A. In Fig. 8 bounded region 832 is drawn approximately twice as wide as corresponding bounded region 732 in Fig. 7; area 834 is similarly represented as a rectangle instead of square 734. The figure does not purport to illustrate the precise aspect ratios of these areas, but the possibility of using different aspect ratios is important. Light beam cross section 830 has a less elongated shape than corresponding beam 730.
[076] All other things being equal, the light incident upon the MOPD in Fig. 8 is spread over a wider area than that in Fig. 7A. Therefore, if a material limitation makes it necessary to restrict the intensity (power per unit area) of light falling on the MOPD, more power can be applied to the MOPD of Fig. 8 than to that of Fig. 7A. Fig. 8 thus represents a design with greater power handling capacity, and therefore one that can produce a brighter displayed image than the design of Fig. 7A. An incoming light beam can be expanded for operation with a wider aperture slot as in Fig. 8 through the use of cylinder optics.
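The scaling of permissible power with illuminated area can be illustrated with assumed numbers; the intensity limit and beam dimensions below are not taken from the disclosure.

    # Sketch: at a fixed intensity limit (power per unit area), the allowed
    # optical power scales with the illuminated area on the MOPD.
    intensity_limit_mw_per_mm2 = 50.0          # assumed material limit

    narrow_area_mm2 = 4.0 * 0.020   # assumed 4 mm x 20 um strip, as in Fig. 7A
    wide_area_mm2 = 4.0 * 0.040     # same length, slot twice as wide, as in Fig. 8

    print("narrow slot allows", intensity_limit_mw_per_mm2 * narrow_area_mm2, "mW")  # 4.0 mW
    print("wide slot allows  ", intensity_limit_mw_per_mm2 * wide_area_mm2, "mW")    # 8.0 mW

Doubling the slot width doubles the illuminated area and therefore doubles the power that can be applied without exceeding the assumed intensity limit.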
[077] Polarization light modulators described herein focus light in an elongated beam cross section on a linear array MOPD. Orthogonal polarizations are displaced parallel to the long axis of the elongated beam cross section. Compact modulator designs optimized for high brightness and contrast have been described.
[078] As one skilled in the art will readily appreciate from the disclosure of the embodiments herein, processes, machines, manufacture, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, means, methods, or steps.
[079] The above description of illustrated embodiments of the systems and methods is not intended to be exhaustive or to limit the systems and methods to the precise form disclosed. While specific embodiments of, and examples for, the systems and methods are described herein for illustrative purposes, various
equivalent modifications are possible within the scope of the systems and methods, as those skilled in the relevant art will recognize. The teachings of the systems and methods provided herein can be applied to other systems and methods, not only for the systems and methods described above.
[080] In general, in the following claims, the terms used should not be construed to limit the systems and methods to the specific embodiments disclosed in the specification and the claims, but should be construed to include all systems that operate under the claims. Accordingly, the systems and methods are not limited by the disclosure; instead, the scope of the systems and methods is to be determined entirely by the claims.
What is claimed is:
1. A polarization light modulator comprising:
a polarizing beam splitter;
a polarization displacement device; and,
a MEMS optical phase shift device; wherein,
light propagates through the polarizing beam splitter and the polarization displacement device before arriving at the MEMS optical phase shift device in a beam of elongated cross section oriented perpendicular to ribbons in the MEMS optical phase shift device.
2. A modulator as in Claim 1 wherein the polarization displacement device offsets orthogonal polarizations of light in a direction parallel to the long axis of the elongated cross section.
3. A modulator as in Claim 2 wherein the polarizing beam splitter also functions as a scanning mirror.
4. A modulator as in Claim 2 wherein a line image output from the modulator is focused near infinity for close viewing by an observer.
5. A modulator as in Claim 2 further comprising an aperture that blocks stray light from arriving at the MEMS optical phase shift device.
6. A modulator as in Claim 2 further comprising a double-slit aperture that improves contrast in a line image.
7. A modulator as in Claim 2 wherein the MEMS optical phase shift device comprises ribbons of alternating stiffness.
8. A modulator as in Claim 2 wherein the MEMS optical phase shift device comprises ribbons which are supported at their ends and in which alternating ribbons are further supported by intermediate supports.
Abstract
Polarization light modulators are based on an interferometric design in which a polarizing beam splitter serves as a polarizer and analyzer. A polarization displacement device shifts orthogonally polarized light beams parallel to the long axis of their elongated cross sections. Phase shifts are imparted to the orthogonally polarized beams by linear array MEMS optical phase shift devices. The output of the light modulator is a line image which may be scanned to form a two-dimensional image. Features to improve brightness, contrast and overall compactness of design are disclosed.