Embodiments
The first through eleventh embodiments, which are preferred forms of the present invention, are described below.
The first embodiment describes a visual processing device that uses a two-dimensional LUT.
The second embodiment describes a visual processing device that performs ambient-light correction when ambient light is present in the environment in which an image is displayed.
The third embodiment describes application examples of the first and second embodiments.
The fourth through sixth embodiments describe visual processing devices that improve the gradation processing used to realize visual effects.
The seventh embodiment describes a visual processing device that performs visual processing using an appropriate blur signal.
The eighth embodiment describes application examples of the fourth through seventh embodiments.
The ninth embodiment describes application examples of the first through eighth embodiments.
The tenth embodiment describes an application example in which the visual processing devices of the above embodiments are applied to a display device.
The eleventh embodiment describes an application example in which the visual processing devices of the above embodiments are applied to an image capturing device.
(First Embodiment)
A visual processing device 1 that uses a two-dimensional LUT is described as a first embodiment of the present invention with reference to Figs. 1 to 10. Modifications of the visual processing device are described with reference to Figs. 11 to 14. In addition, visual processing devices that realize visual processing equivalent to that of the visual processing device 1 are described with reference to Figs. 15 to 23.
The visual processing device 1 is a device that performs visual processing of an image signal, such as spatial processing and gradation processing. Together with a device that performs color processing of the image signal, the visual processing device 1 constitutes an image processing device in a machine that handles images, such as a computer, a television set, a digital camera, a portable phone, a PDA, a printer, or a scanner.
(Visual Processing Device 1)
Fig. 1 shows the basic configuration of the visual processing device 1, which performs visual processing of an image signal (input signal IS) and outputs the processed image as an output signal OS. The visual processing device 1 comprises: a spatial processing unit 2 that performs spatial processing on the luminance value of each pixel of the original image obtained as the input signal IS and outputs an unsharp signal US; and a visual processing unit 3 that performs visual processing of the original image using the input signal IS and the unsharp signal US for the same pixel and outputs the output signal OS.
The spatial processing unit 2 obtains the unsharp signal US by applying, for example, a low-pass spatial filter that passes only the low-frequency spatial components of the input signal IS. As the low-pass spatial filter, an FIR (Finite Impulse Response) type or IIR (Infinite Impulse Response) type low-pass spatial filter commonly used for generating unsharp signals may be employed.
The visual processing unit 3 has a two-dimensional LUT 4 that gives the relationship between the input signal IS and the unsharp signal US on the one hand and the output signal OS on the other, and outputs the output signal OS for the input signal IS and the unsharp signal US by referring to the two-dimensional LUT 4.
(Two-Dimensional LUT 4)
Matrix data referred to as profile data are registered in the two-dimensional LUT 4. The profile data have rows (or columns) corresponding to the pixel values of the input signal IS and columns (or rows) corresponding to the pixel values of the unsharp signal US, and each matrix element stores the pixel value of the output signal OS corresponding to the combination of the input signal IS and the unsharp signal US. The profile data are registered in the two-dimensional LUT 4 by a profile data registration device 8 that is built into or connected to the visual processing device 1. The profile data registration device 8 stores a plurality of profile data created in advance by a personal computer (PC) or the like. For example, it stores a plurality of profile data that realize contrast enhancement, dynamic range compression processing, gradation correction, and so on (see the section "Profile Data" below for details). In this way, the visual processing device 1 can realize a variety of visual processing by using the profile data registration device 8 to change the profile data registered in the two-dimensional LUT 4.
Fig. 2 shows an example of profile data. The profile data shown in Fig. 2 realize, in the visual processing device 1, processing equivalent to that of the visual processing device 400 shown in Fig. 108. In Fig. 2, the profile data are expressed as a 64 x 64 matrix, in which the column direction (vertical) indicates the upper 6 bits of the luminance value of the input signal IS expressed in 8 bits, and the row direction (horizontal) indicates the upper 6 bits of the luminance value of the unsharp signal US expressed in 8 bits. Each matrix element corresponding to the two luminance values expresses the value of the output signal OS in 8 bits.
The value C of each element of the profile data shown in Fig. 2 (the value of the output signal OS) is expressed as C = A + 0.5 x (A - B) (hereinafter referred to as Equation M11), using the value A of the input signal IS (for example, the value obtained by discarding the lower 2 bits of the input signal IS expressed in 8 bits) and the value B of the unsharp signal US (for example, the value obtained by discarding the lower 2 bits of the unsharp signal US expressed in 8 bits). That is, the visual processing device 1 performs processing equivalent to that of the visual processing device 400 (see Fig. 108) using the enhancement function R1 (see Fig. 109).
Depending on the combination of the value A of the input signal IS and the value B of the unsharp signal US, the value C obtained by Equation M11 may become negative. In that case, the element of the profile data corresponding to that combination of the value A of the input signal IS and the value B of the unsharp signal US may be set to 0. Also, depending on the combination of the value A of the input signal IS and the value B of the unsharp signal US, the value C obtained by Equation M11 may saturate; that is, it may exceed 255, the maximum value that can be expressed in 8 bits. In that case, the element of the profile data corresponding to that combination of the value A of the input signal IS and the value B of the unsharp signal US may be set to 255. In Fig. 2, the elements of the profile data obtained in this way are shown as contour lines.
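As a rough illustration only (not part of the original description), the following Python sketch shows how the 64 x 64 profile data of Fig. 2 could be computed from Equation M11, including the clipping to 0 and 255 described above; re-expanding the 6-bit indices to the 8-bit scale is an assumption made here for simplicity.

    import numpy as np

    def build_profile_m11():
        # A and B are the upper 6 bits of the 8-bit IS and US, re-expanded
        # to the 8-bit scale (step of 4) before applying Equation M11.
        a = np.arange(64).reshape(64, 1) * 4   # rows: input signal IS
        b = np.arange(64).reshape(1, 64) * 4   # columns: unsharp signal US
        c = a + 0.5 * (a - b)                  # Equation M11
        return np.clip(np.round(c), 0, 255).astype(np.uint8)

    profile = build_profile_m11()
    print(profile.shape)     # (64, 64)
    print(profile[32, 16])   # element for A = 128, B = 64 -> 160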
Further, for example, if the value C of each element of the profile data is expressed as C = R6(B) + R5(B) x (A - B) (hereinafter referred to as Equation M12), processing equivalent to that of the visual processing device 406 shown in Fig. 110 can be realized. Here, the function R5 is a function that outputs an amplification coefficient signal GS from the unsharp signal US in a first conversion unit 409, and the function R6 is a function that outputs a corrected unsharp signal AS from the unsharp signal US in a second conversion unit 411.
Furthermore, if the value C of each element of the profile data is expressed as C = A + R8(B) (hereinafter referred to as Equation M13), processing equivalent to that of the visual processing device 416 shown in Fig. 111 can be realized. Here, the function R8 is a function that outputs a processed signal LS from the unsharp signal US.
When the value C of an element of the profile data obtained by Equation M12 or Equation M13 falls outside the range 0 <= C <= 255, the value of that element may be set to 0 or 255.
(Visual Processing Method and Visual Processing Program)
Fig. 3 is a flowchart explaining the visual processing method in the visual processing device 1. The visual processing method shown in Fig. 3 is realized by hardware in the visual processing device 1, and is a method of performing visual processing of the input signal IS (see Fig. 1).
In the visual processing method shown in Fig. 3, the input signal IS is subjected to spatial processing by the low-pass spatial filter to obtain the unsharp signal US (step S11). Then, the value of the two-dimensional LUT 4 corresponding to the input signal IS and the unsharp signal US is referred to and output as the output signal OS (step S12). The above processing is performed for each pixel input as the input signal IS.
Each step of the visual processing method shown in Fig. 3 may also be realized as a visual processing program by a computer or the like.
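A minimal sketch of steps S11 and S12, assuming a simple box filter stands in for the low-pass spatial filter and that scipy is available; the actual filter taps and LUT contents are not fixed by the description above.

    import numpy as np
    from scipy.ndimage import uniform_filter  # assumed available

    def visual_process(image_8bit, lut64):
        """Steps S11 and S12 for every pixel of an 8-bit single-channel image."""
        # Step S11: spatial processing with a low-pass filter -> unsharp signal US
        us = uniform_filter(image_8bit.astype(np.float32), size=7)
        # Step S12: reference the 2D LUT with the upper 6 bits of IS and US
        a_idx = image_8bit >> 2
        b_idx = np.clip(us, 0, 255).astype(np.uint8) >> 2
        return lut64[a_idx, b_idx]

    # usage with an identity-like LUT (output = value of IS, ignoring US)
    identity_lut = np.repeat((np.arange(64) * 4).astype(np.uint8)[:, None], 64, axis=1)
    img = (np.random.rand(120, 160) * 255).astype(np.uint8)
    out = visual_process(img, identity_lut)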
(Effects)
(1)
When visual processing is performed based only on the value A of the input signal IS (for example, when conversion is performed according to a one-dimensional gradation conversion curve), pixels having the same density at different locations in the image are converted to the same brightness. More specifically, if a dark part of the background behind a person in the image is brightened, the hair of the person, which has the same density, is also brightened.
In contrast, the visual processing device 1 performs visual processing using profile data created as a two-dimensional function of the value A of the input signal IS and the value B of the unsharp signal US. Therefore, pixels having the same density at different locations in the image are not converted uniformly; they can be brightened or darkened taking the surrounding information into account, so that each region of the image can be adjusted to an optimal brightness. More specifically, the background having the same density as the hair of the person in the image can be brightened without changing the density of the hair.
(2)
The visual processing device 1 performs visual processing of the input signal IS using the two-dimensional LUT 4. The visual processing device 1 has a hardware configuration that does not depend on the visual processing effect to be realized. That is, the visual processing device 1 can be configured with general-purpose hardware, which is effective for reducing hardware cost and the like.
(3)
The profile data registered in the two-dimensional LUT 4 can be changed by the profile data registration device 8. Therefore, the visual processing device 1 can realize a variety of visual processing simply by changing the profile data, without changing the hardware configuration of the visual processing device 1. More specifically, the visual processing device 1 can perform spatial processing and gradation processing simultaneously.
(4)
The profile data registered in the two-dimensional LUT 4 can be calculated in advance. However complex the processing realized by profile data that have already been created, the time required for visual processing using them is constant. Therefore, whether the device is configured by hardware or software, even for visual processing with a complex configuration, the processing time of the visual processing device 1 does not depend on the complexity of the visual processing, and the visual processing can be speeded up.
(Modifications)
(1)
In Fig. 2, profile data in the form of a 64 x 64 matrix have been described. However, the effects of the present invention do not depend on the size of the profile data. For example, the two-dimensional LUT 4 may hold profile data corresponding to all combinations of values that the input signal IS and the unsharp signal US can take. For example, when the input signal IS and the unsharp signal US are expressed in 8 bits, the profile data may be in the form of a 256 x 256 matrix.
In this case, although the memory capacity required for the two-dimensional LUT 4 increases, more accurate visual processing can be realized.
(2)
In Fig. 2, the profile data were described as storing the value of the output signal OS for the upper 6 bits of the luminance value of the input signal IS expressed in 8 bits and the upper 6 bits of the luminance value of the unsharp signal US expressed in 8 bits. Here, the visual processing device 1 may further include an interpolation unit that linearly interpolates the value of the output signal OS based on adjacent elements of the profile data and on the magnitudes of the lower 2 bits of the input signal IS and of the unsharp signal US.
In this case, more accurate visual processing can be realized without increasing the memory capacity required for the two-dimensional LUT 4.
Alternatively, the visual processing unit 3 may include an interpolation unit that outputs, as the output signal OS, a value obtained by linear interpolation of the values held in the two-dimensional LUT 4.
Fig. 4 shows a visual processing unit 500, which is a modification of the visual processing unit 3 and includes an interpolation unit 501. The visual processing unit 500 comprises: a two-dimensional LUT 4 that gives the relationship between the input signal IS and the unsharp signal US on the one hand and a pre-interpolation output signal NS on the other; and the interpolation unit 501, which receives the pre-interpolation output signal NS, the input signal IS, and the unsharp signal US and outputs the output signal OS.
The two-dimensional LUT 4 stores the values of the pre-interpolation output signal NS for the upper 6 bits of the luminance value of the input signal IS expressed in 8 bits and the upper 6 bits of the luminance value of the unsharp signal US expressed in 8 bits. Each value of the pre-interpolation output signal NS is stored, for example, as an 8-bit value. When the 8-bit value of the input signal IS and the 8-bit value of the unsharp signal US are input, the two-dimensional LUT 4 outputs the four values of the pre-interpolation output signal NS corresponding to the interval that contains those values. The interval containing those values is the interval bounded by the four values of the pre-interpolation output signal NS stored for the combinations (the upper 6-bit value of the input signal IS, the upper 6-bit value of the unsharp signal US), (the smallest 6-bit value exceeding the upper 6-bit value of the input signal IS, the upper 6-bit value of the unsharp signal US), (the upper 6-bit value of the input signal IS, the smallest 6-bit value exceeding the upper 6-bit value of the unsharp signal US), and (the smallest 6-bit value exceeding the upper 6-bit value of the input signal IS, the smallest 6-bit value exceeding the upper 6-bit value of the unsharp signal US).
The lower 2 bits of the input signal IS and the lower 2 bits of the unsharp signal US are input to the interpolation unit 501, which uses these values to linearly interpolate the four values of the pre-interpolation output signal NS output from the two-dimensional LUT 4. More specifically, the interpolation unit 501 computes a weighted average of the four values of the pre-interpolation output signal NS using the lower 2 bits of the input signal IS and the lower 2 bits of the unsharp signal US, and outputs the result as the output signal OS.
In this way, more accurate visual processing can be realized without increasing the memory capacity required for the two-dimensional LUT 4.
In the interpolation unit 501, linear interpolation may also be performed on only one of the input signal IS and the unsharp signal US.
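A minimal sketch of the interpolation unit 501: the four pre-interpolation values NS surrounding an 8-bit (IS, US) pair are averaged with weights taken from the lower 2 bits. Clamping the indices at the table edge and the example LUT built from Equation M11 are assumptions for illustration.

    import numpy as np

    def lookup_interpolated(lut64, is_val, us_val):
        """Interpolated lookup of an 8-bit (IS, US) pair in a 64x64 LUT."""
        ai, bi = is_val >> 2, us_val >> 2                  # upper 6 bits
        wa, wb = (is_val & 3) / 4.0, (us_val & 3) / 4.0    # lower 2 bits as weights
        ai1, bi1 = min(ai + 1, 63), min(bi + 1, 63)        # clamp at the table edge
        ns00, ns01 = float(lut64[ai, bi]),  float(lut64[ai, bi1])
        ns10, ns11 = float(lut64[ai1, bi]), float(lut64[ai1, bi1])
        top = ns00 * (1 - wb) + ns01 * wb
        bottom = ns10 * (1 - wb) + ns11 * wb
        return top * (1 - wa) + bottom * wa                # weighted average -> OS

    # usage with the Equation M11 profile of the earlier sketch
    a = np.arange(64).reshape(64, 1) * 4.0
    b = np.arange(64).reshape(1, 64) * 4.0
    lut = np.clip(np.round(a + 0.5 * (a - b)), 0, 255).astype(np.uint8)
    print(lookup_interpolated(lut, 130, 66))   # 162.0, the exact M11 value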
(3)
In the spatial processing performed by the spatial processing unit 2, for the input signal IS of the pixel of interest, the mean (simple mean or weighted mean), maximum, minimum, or median of the input signal IS over the pixel of interest and its surrounding pixels may be output as the unsharp signal US. Alternatively, the mean, maximum, minimum, or median of only the surrounding pixels of the pixel of interest may be output as the unsharp signal US.
(4)
In Fig. 2, the value C of each element of the profile data is created by Equation M11 as a linear function of each of the value A of the input signal IS and the value B of the unsharp signal US. Alternatively, the value C of each element of the profile data may be created based on a nonlinear function of the value A of the input signal IS.
In this case, it is possible, for example, to realize visual processing adapted to visual characteristics, or visual processing suited to the nonlinear characteristics of the machine that handles images and outputs the output signal OS, such as a computer, television set, digital camera, portable phone, PDA, printer, or scanner.
Further, the value C of each element of the profile data may be created based on a nonlinear function of each of the value A of the input signal IS and the value B of the unsharp signal US, that is, a two-dimensional nonlinear function.
For example, when visual processing is performed based only on the value A of the input signal IS (for example, when conversion is performed according to a one-dimensional gradation conversion curve), pixels having the same density at different locations in the image are converted to the same brightness. More specifically, if a dark part of the background behind a person in the image is brightened, the hair of the person, which has the same density, is also brightened.
On the other hand, when visual processing is performed using profile data created based on a two-dimensional nonlinear function, pixels having the same density at different locations in the image are not converted uniformly; they can be brightened or darkened taking the surrounding information into account, so that each region of the image can be adjusted to an optimal brightness. More specifically, the background having the same density as the hair of the person in the image can be brightened without changing the density of the hair. Furthermore, visual processing that preserves gradation can be performed even in pixel regions where the pixel value would saturate under processing based on a linear function.
Fig. 5 shows an example of such profile data. The profile data shown in Fig. 5 are profile data that make the visual processing device 1 realize contrast enhancement adapted to visual characteristics. In Fig. 5, the profile data are expressed as a 64 x 64 matrix, in which the column direction (vertical) indicates the upper 6 bits of the luminance value of the input signal IS expressed in 8 bits, and the row direction (horizontal) indicates the upper 6 bits of the luminance value of the unsharp signal US expressed in 8 bits. Each matrix element corresponding to the two luminance values expresses the value of the output signal OS in 8 bits.
The value C of each element of the profile data shown in Fig. 5 (the value of the output signal OS) is expressed as C = F2(F1(A) + F3(F1(A) - F1(B))) (hereinafter referred to as Equation M14), using the value A of the input signal IS (for example, the value obtained by discarding the lower 2 bits of the input signal IS expressed in 8 bits), the value B of the unsharp signal US (for example, the value obtained by discarding the lower 2 bits of the unsharp signal US expressed in 8 bits), a transformation function F1, an inverse transformation function F2 of the transformation function, and an enhancement function F3. Here, the transformation function F1 is a common logarithm function. The inverse transformation function F2 is an exponential function (antilogarithm) that is the inverse of the common logarithm function. The enhancement function F3 is any of the enhancement functions R1 to R3 described with reference to Fig. 109.
These profile data realize visual processing applied to the input signal IS and the unsharp signal US after they are transformed into logarithmic space by the transformation function F1. Since human visual characteristics are logarithmic, processing performed after transformation into logarithmic space realizes visual processing well suited to those characteristics. Thus, the visual processing device 1 realizes contrast enhancement in logarithmic space.
Depending on the combination of the value A of the input signal IS and the value B of the unsharp signal US, the value C obtained by Equation M14 may become negative. In that case, the element of the profile data corresponding to that combination of the value A of the input signal IS and the value B of the unsharp signal US may be set to 0. Also, depending on the combination of the value A of the input signal IS and the value B of the unsharp signal US, the value C obtained by Equation M14 may saturate; that is, it may exceed 255, the maximum value that can be expressed in 8 bits. In that case, the element of the profile data corresponding to that combination of the value A of the input signal IS and the value B of the unsharp signal US may be set to 255. In Fig. 5, the elements of the profile data obtained in this way are shown as contour lines.
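The following sketch builds profile data from Equation M14 with F1 = log10 and F2 the corresponding exponential; the enhancement function F3 is a simple gain standing in for R1 to R3 (which are not reproduced here), and the small offset added to avoid log10(0) is an implementation convenience, not part of the description.

    import numpy as np

    def build_profile_m14(gain=0.7):
        f1 = np.log10                                  # transformation function F1
        f2 = lambda x: np.power(10.0, x)               # inverse transformation F2
        f3 = lambda d: gain * d                        # stand-in for R1-R3
        a = np.arange(64).reshape(64, 1) * 4 + 1.0     # +1 avoids log10(0)
        b = np.arange(64).reshape(1, 64) * 4 + 1.0
        c = f2(f1(a) + f3(f1(a) - f1(b)))              # Equation M14
        return np.clip(np.round(c), 0, 255).astype(np.uint8)

    profile_m14 = build_profile_m14()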
A more detailed description of nonlinear profile data is given below in the section (Profile Data).
(5)
The profile data held in the two-dimensional LUT 4 may include a plurality of gradation conversion curves (gamma curves) that realize gradation correction of the input signal IS.
Each gradation conversion curve is, for example, a monotonically increasing function such as a gamma function with a different gamma coefficient, and is associated with a value of the unsharp signal US. The association is made, for example, such that a gamma function with a larger gamma coefficient is selected for smaller values of the unsharp signal US. In this way, the unsharp signal US functions as a selection signal for selecting at least one gradation conversion curve from the group of gradation conversion curves included in the profile data.
With the above configuration, gradation conversion of the value A of the input signal IS is performed using the gradation conversion curve selected by the value B of the unsharp signal US.
As in modification (2) described above, the output of the two-dimensional LUT 4 may also be interpolated.
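As a hypothetical illustration of such profile data, the sketch below stores one gamma curve per unsharp-signal value; the specific mapping from the value of US to the exponent (here, darker surroundings give a brighter curve) is an assumption chosen only to show the structure, not a parameterization taken from the description.

    import numpy as np

    def build_profile_gamma_family():
        x = np.arange(64).reshape(64, 1) / 63.0     # normalized input signal IS
        us = np.arange(64).reshape(1, 64) / 63.0    # normalized unsharp signal US
        exponent = 0.5 + 0.5 * us                   # assumed: smaller US -> brighter curve
        c = 255.0 * np.power(x, exponent)           # one monotone gamma curve per US column
        return np.clip(np.round(c), 0, 255).astype(np.uint8)

    profile_gamma = build_profile_gamma_family()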
(6)
It has been described that the profile data registration device 8 is built into or connected to the visual processing device 1, stores a plurality of profile data created in advance by a PC or the like, and changes the content registered in the two-dimensional LUT 4.
Here, the profile data stored in the profile data registration device 8 are created by a PC provided outside the visual processing device 1. The profile data registration device 8 obtains the profile data from the PC via a network or via a recording medium.
The profile data registration device 8 registers the plurality of stored profile data in the two-dimensional LUT 4 in accordance with predetermined conditions. This is described in detail with reference to Figs. 6 to 8. Parts having substantially the same functions as those of the visual processing device 1 described with reference to Fig. 1 are given the same reference numerals, and their description is omitted.
【1】
Fig. 6 is a block diagram of a visual processing device 520 that determines the image of the input signal IS and, based on the determination result, switches the profile data registered in the two-dimensional LUT 4.
The visual processing device 520 has the same structure as the visual processing device 1 shown in Fig. 1 and further includes a profile data registration unit 521 having the same function as the profile data registration device 8. The visual processing device 520 further includes an image determination unit 522.
The image determination unit 522 receives the input signal IS and outputs a determination result SA for the input signal IS. The profile data registration unit 521 receives the determination result SA and outputs profile data PD selected based on the determination result SA.
The image determination unit 522 determines the image of the input signal IS. In the image determination, the brightness of the input signal IS is determined by obtaining a pixel value such as the luminance or lightness of the input signal IS.
The profile data registration unit 521 obtains the determination result SA and, based on the determination result SA, switches the profile data PD that are output. More specifically, when the input signal IS is determined to be bright, for example, profile data that compress the dynamic range or the like are selected. In this way, contrast can be maintained even for an image that is bright overall. Further, in consideration of the characteristics of the device that displays the output signal OS, a profile that outputs the output signal OS with a suitable dynamic range may be selected.
As described above, the visual processing device 520 can realize visual processing suitable for the input signal IS.
The image determination unit 522 may determine not only pixel values such as the luminance and lightness of the input signal IS but also image characteristics such as spatial frequency.
In this case, for an input signal IS with a low spatial frequency, for example, a profile with a higher degree of sharpness enhancement can be selected, realizing more suitable visual processing.
【2】
Fig. 7 is a block diagram of a visual processing device 525 that switches the profile data registered in the two-dimensional LUT 4 based on the input result from an input device for inputting conditions relating to brightness.
The visual processing device 525 has the same structure as the visual processing device 1 shown in Fig. 1 and further includes a profile data registration unit 526 having the same function as the profile data registration device 8. The visual processing device 525 further includes an input device 527 connected by wire or wirelessly. More specifically, the input device 527 is realized, for example, as an input button provided on the machine itself that handles images and outputs the output signal OS, such as a computer, television set, digital camera, portable phone, PDA, printer, or scanner, or as a remote control for such a machine.
The input device 527 is an input device for inputting conditions relating to brightness, and has, for example, switches such as "bright" and "dark". The input device 527 is operated by the user and outputs an input result SB.
The profile data registration unit 526 obtains the input result SB and, based on the input result SB, switches the profile data PD that are output. More specifically, when the user inputs "bright", for example, profile data that compress the dynamic range of the input signal IS or the like are selected and output as the profile data PD. In this way, contrast can be maintained even when the environment in which the device displaying the output signal OS is placed is in a "bright" state.
As described above, the visual processing device 525 can realize visual processing suitable for the input from the input device 527.
The conditions relating to brightness include not only conditions relating to the brightness of the ambient light around the medium that displays the output signal, such as a computer, television set, digital camera, portable phone, or PDA, but also, for example, conditions relating to the brightness of the medium itself on which the output signal is output, such as printer paper. They may also be, for example, conditions relating to the brightness of the medium itself from which the input signal is read, such as paper read by a scanner.
These conditions may be input not only by switches and the like but also automatically by photosensors and the like.
The input device 527 may also be a device that not only inputs conditions relating to brightness but directly causes the profile data registration unit 526 to switch profiles. In this case, the input device 527 may display a list of the profile data in addition to the conditions relating to brightness and let the user make a selection.
In this way, the user can perform visual processing that matches his or her requirements.
The input device 527 may also be a device that identifies the user. In this case, the input device 527 may be a camera for identifying the user or a device for inputting the user name.
For example, when the input device 527 indicates that the user is a child, profile data that suppress excessive luminance changes or the like are selected.
In this way, visual processing suited to the user can be realized.
【3】
Fig. 8 is a block diagram of a visual processing device 530 that switches the profile data registered in the two-dimensional LUT 4 based on the detection results of a brightness detection unit that detects two kinds of brightness.
The visual processing device 530 has the same structure as the visual processing device 1 shown in Fig. 1 and further includes a profile data registration unit 531 having the same function as the profile data registration device 8. The visual processing device 530 further includes a brightness detection unit 532.
The brightness detection unit 532 comprises the image determination unit 522 and the input device 527. The image determination unit 522 and the input device 527 are the same as those described with reference to Figs. 6 and 7. Thus, the brightness detection unit 532 receives the input signal IS and outputs, as detection results, the determination result SA from the image determination unit 522 and the input result SB from the input device 527.
The profile data registration unit 531 receives the determination result SA and the input result SB and, based on the determination result SA and the input result SB, switches the profile data PD that are output. More specifically, when, for example, the ambient light is in a "bright" state and the input signal IS is also determined to be bright, profile data that compress the dynamic range of the input signal IS or the like are selected and output as the profile data PD. In this way, contrast can be maintained when the output signal OS is displayed.
As described above, the visual processing device 530 can realize suitable visual processing.
【4】
In the visual processing devices of Figs. 6 to 8, the respective profile data registration units need not be integrated with the visual processing devices. Specifically, a profile data registration unit may be connected to the visual processing device via a network, as a server holding a plurality of profile data or as a plurality of servers each holding individual profile data. Here, the network is a connection means capable of communication, such as a dedicated line, a public line, the Internet, or a LAN, and may be either wired or wireless. In this case, the determination result SA and the input result SB are likewise conveyed from the visual processing device side to the profile data registration unit side via the network.
(7)
In the above description, the profile data registration device 8 holds a plurality of profile data and realizes different visual processing by switching which profile data are registered in the two-dimensional LUT 4.
Here, the visual processing device 1 may instead include a plurality of two-dimensional LUTs, in each of which profile data realizing different visual processing are registered. In this case, the visual processing device 1 realizes different visual processing by switching the input to each two-dimensional LUT or by switching the output from each two-dimensional LUT.
In this case, the memory capacity that must be provided for the two-dimensional LUTs increases, but the time required to switch the visual processing can be shortened.
The profile data registration device 8 may also be a device that generates new profile data based on a plurality of profile data and registers the generated profile data in the two-dimensional LUT 4.
This is described with reference to Figs. 9 and 10.
Fig. 9 is a block diagram mainly explaining a profile data registration device 701, which is a modification of the profile data registration device 8. The profile data registration device 701 is a device for switching the profile data registered in the two-dimensional LUT 4 of the visual processing device 1.
The profile data registration device 701 comprises: a profile data registration unit 702 in which a plurality of profile data are registered; a profile creation execution unit 703 that creates new profile data based on the plurality of profile data; a parameter input unit 706 for inputting the parameters used to generate the new profile data; and a control unit 705 that controls these units.
In the profile data registration unit 702, as in the profile data registration device 8 or each of the profile data registration units shown in Figs. 6 to 8, a plurality of profile data are registered, and the selected profile data specified by a count signal c10 from the control unit 705 are read out. Here, two sets of selected profile data are read from the profile data registration unit 702, as first selected profile data d10 and second selected profile data d11.
Which profile data are read from the profile data registration unit 702 is determined by the output of the parameter input unit 706. For example, in the parameter input unit 706, information such as the desired visual processing effect, the degree of processing, and the viewing environment of the image to be processed is input as parameters, either manually or automatically by sensors or the like. The control unit 705 specifies, by the count signal c10, the profile data to be read in accordance with the parameters input from the parameter input unit 706, and at the same time specifies, by a control signal c12, the value of the degree of synthesis of each set of profile data.
The profile creation execution unit 703 includes a profile generation unit 704 that creates generated profile data d6, which are new profile data, from the first selected profile data d10 and the second selected profile data d11.
The profile generation unit 704 obtains the first selected profile data d10 and the second selected profile data d11 from the profile data registration unit 702, and further obtains from the control unit 705 the control signal c12 that specifies the degree of synthesis of each set of selected profile data.
The profile generation unit 704 then creates the generated profile data d6 having a value [l] from the value [m] of the first selected profile data d10 and the value [n] of the second selected profile data d11, using the value [k] of the degree of synthesis specified by the control signal c12. Here, the value [l] is calculated as [l] = (1 - k) x [m] + k x [n]. When the value [k] satisfies 0 <= k <= 1, the first selected profile data d10 and the second selected profile data d11 are interpolated; when the value [k] satisfies k < 0 or k > 1, they are extrapolated.
The two-dimensional LUT 4 obtains the generated profile data d6 created by the profile generation unit 704 and stores the obtained values at the addresses specified by a count signal c11 from the control unit 705. Here, the generated profile data d6 are associated with the same image signal values as the selected profile data used to generate them.
In this way, new profile data that realize yet another visual processing can be created based on profile data that realize different visual processing.
A visual processing profile creation method executed in a visual processing device provided with the profile data registration device 701 is described with reference to Fig. 10.
The count signal c10 from the control unit 705 specifies addresses in the profile data registration unit 702 at a fixed operation cycle, and the image signal values stored at the specified addresses are read out (step S701). More specifically, the control unit 705 outputs the count signal c10 in accordance with the parameters input by the parameter input unit 706. The count signal c10 specifies the addresses, in the profile data registration unit 702, of two sets of profile data that realize different visual processing. In this way, the first selected profile data d10 and the second selected profile data d11 are read from the profile data registration unit 702.
The profile generation unit 704 obtains from the control unit 705 the control signal c12 that specifies the degree of synthesis (step S702).
The profile generation unit 704 creates the generated profile data d6 having the value [l] from the value [m] of the first selected profile data d10 and the value [n] of the second selected profile data d11, using the value [k] of the degree of synthesis specified by the control signal c12 (step S703). Here, the value [l] is calculated as [l] = (1 - k) x [m] + k x [n].
The generated profile data d6 are written to the two-dimensional LUT 4 (step S704). Here, the write destination address is specified by the count signal c11 given from the control unit 705 to the two-dimensional LUT 4.
The control unit 705 determines whether the processing has been completed for all the data of the selected profile data (step S705), and the processing from step S701 to step S705 is repeated until it is completed.
The new profile data stored in the two-dimensional LUT 4 in this way are then used to execute the visual processing.
[Effects of (7)]
In a visual processing device provided with the profile data registration device 701, new profile data that realize different visual processing can be created from profile data that realize different visual processing, and visual processing can then be executed with them. That is, as long as a small number of profile data are registered in the profile data registration unit 702, visual processing of an arbitrary degree can be realized, and the memory capacity of the profile data registration unit 702 can be reduced.
The profile data registration device 701 may be provided not only in the visual processing device 1 shown in Fig. 1 but also in the visual processing devices of Figs. 6 to 8. In that case, the profile data registration unit 702 and the profile creation execution unit 703 may be used in place of the profile data registration units 521, 526, and 531 shown in Figs. 6 to 8, and the parameter input unit 706 and the control unit 705 may be used in place of the image determination unit 522 of Fig. 6, the input device 527 of Fig. 7, and the brightness detection unit 532 of Fig. 8.
(8)
The visual processing device may also be a device that converts the brightness of the input signal IS. A visual processing device 901 that converts brightness is described with reference to Fig. 11.
[Configuration]
The visual processing device 901 is a device that converts the brightness of an input signal IS', and comprises: a processing unit 902 that performs predetermined processing on the input signal IS' and outputs a processed signal US'; and a conversion unit 903 that converts the input signal IS' using the input signal IS' and the processed signal US'.
The processing unit 902 operates in the same way as the spatial processing unit 2 (see Fig. 1) and performs spatial processing of the input signal IS'. It may also perform the spatial processing described in the modifications above.
The conversion unit 903, like the visual processing unit 3, has a two-dimensional LUT and outputs an output signal OS' (value [y]) based on the input signal IS' (value [x]) and the processed signal US' (value [z]).
Here, the value of each element of the two-dimensional LUT held by the conversion unit 903 is determined by a gain or offset that acts on the value [x] of the input signal IS' and that is determined by a function fk(z) relating to the degree of change of brightness. Hereinafter, the function fk(z) relating to the degree of change of brightness is called the "degree-of-change function".
The value of each element of the two-dimensional LUT (that is, the value [y] of the output signal OS') is determined based on a function of the value [x] of the input signal IS' and the value [z] of the processed signal US'. Hereinafter, this function is called the "transformation function", and transformation functions (a) to (d) are described as examples. Figs. 12(a) to 12(d) show the relationship between the input signal IS' and the output signal OS' when the degree-of-change function fk(z) is varied.
[Transformation Function (a)]
Transformation function (a) is expressed as [y] = f1(z) x [x].
Here, the degree-of-change function f1(z) acts as a gain on the input signal IS'. Therefore, the gain of the input signal IS' changes with the value of the degree-of-change function f1(z), and with it the value [y] of the output signal OS'.
Fig. 12(a) shows the relationship between the input signal IS' and the output signal OS' as the value of the degree-of-change function f1(z) changes.
As the degree-of-change function f1(z) becomes larger (f1(z) > 1), the value [y] of the output signal becomes larger; that is, the converted image becomes brighter. Conversely, as the degree-of-change function f1(z) becomes smaller (f1(z) < 1), the value [y] of the output signal becomes smaller; that is, the converted image becomes darker.
Here, the degree-of-change function f1(z) is a function whose minimum value over the domain of the value [z] is not less than the value [0].
When the value [y] of the output signal computed by transformation function (a) exceeds the range of values that it can take, it may be clipped to that range. For example, when it exceeds the value [1], the value [y] of the output signal may be clipped to the value [1], and when it falls below the value [0], it may be clipped to the value [0]. The same applies to the transformation functions (b) to (d) below.
[Transformation Function (b)]
Transformation function (b) is expressed as [y] = [x] + f2(z).
Here, the degree-of-change function f2(z) acts as an offset on the input signal IS'. Therefore, the offset of the input signal IS' changes with the value of the degree-of-change function f2(z), and with it the value [y] of the output signal OS'.
Fig. 12(b) shows the relationship between the input signal IS' and the output signal OS' as the value of the degree-of-change function f2(z) changes.
As the degree-of-change function f2(z) becomes larger (f2(z) > 0), the value [y] of the output signal becomes larger; that is, the converted image becomes brighter. Conversely, as the degree-of-change function f2(z) becomes smaller (f2(z) < 0), the value [y] of the output signal becomes smaller; that is, the converted image becomes darker.
[Transformation Function (c)]
Transformation function (c) is expressed as [y] = f1(z) x [x] + f2(z).
Here, the degree-of-change function f1(z) acts as a gain on the input signal IS', and the degree-of-change function f2(z) acts as an offset on the input signal IS'. Therefore, the gain of the input signal IS' changes with the value of the degree-of-change function f1(z) and the offset of the input signal IS' changes with the value of the degree-of-change function f2(z), and with them the value [y] of the output signal OS'.
Fig. 12(c) shows the relationship between the input signal IS' and the output signal OS' when the values of the degree-of-change function f1(z) and the degree-of-change function f2(z) change.
As the degree-of-change functions f1(z) and f2(z) become larger, the value [y] of the output signal becomes larger; that is, the converted image becomes brighter. Conversely, as the degree-of-change functions f1(z) and f2(z) become smaller, the value [y] of the output signal becomes smaller; that is, the converted image becomes darker.
[Transformation Function (d)]
Transformation function (d) is expressed as [y] = [x]^(1 - f2(z)).
Here, the degree-of-change function f2(z) determines the exponent of the power function. Therefore, the conversion applied to the input signal IS' changes with the value of the degree-of-change function f2(z), and with it the value [y] of the output signal OS'.
Fig. 12(d) shows the relationship between the input signal IS' and the output signal OS' as the value of the degree-of-change function f2(z) changes.
As the degree-of-change function f2(z) becomes larger (f2(z) > 0), the value [y] of the output signal becomes larger; that is, the converted image becomes brighter. Conversely, as the degree-of-change function f2(z) becomes smaller (f2(z) < 0), the value [y] of the output signal becomes smaller; that is, the converted image becomes darker. When the value of the degree-of-change function f2(z) is [0], no conversion of the input signal IS' is performed.
The value [x] is the value of the input signal IS' normalized to the range of [0] to [1].
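The following sketch builds two-dimensional LUTs from the transformation functions (a) to (d), with x and z normalized to [0, 1] and the output clipped to that range; the particular degree-of-change functions f1(z) and f2(z) used here are arbitrary placeholders chosen only for illustration.

    import numpy as np

    def build_lut(transform, size=64):
        x = np.linspace(0.0, 1.0, size).reshape(size, 1)   # input signal IS', value [x]
        z = np.linspace(0.0, 1.0, size).reshape(1, size)   # processed signal US', value [z]
        return np.clip(transform(x, z), 0.0, 1.0)          # clip [y] to the valid range

    f1 = lambda z: 0.8 + 0.4 * z          # placeholder gain-type degree-of-change function
    f2 = lambda z: 0.3 * z - 0.15         # placeholder offset-type degree-of-change function

    lut_a = build_lut(lambda x, z: f1(z) * x)                  # (a) gain
    lut_b = build_lut(lambda x, z: x + f2(z))                  # (b) offset
    lut_c = build_lut(lambda x, z: f1(z) * x + f2(z))          # (c) gain and offset
    lut_d = build_lut(lambda x, z: np.power(x, 1.0 - f2(z)))   # (d) power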
[Effects]
(1)
The visual processing device 901 performs visual processing of the input signal IS' by means of a two-dimensional LUT whose elements are determined using any of the transformation functions (a) to (d) described above. Each element of the two-dimensional LUT stores the value [y] corresponding to the value [x] and the value [z]. Therefore, visual processing that converts the brightness of the input signal IS' is realized based on the input signal IS' and the processed signal US'.
(2)
Here, when the degree-of-change functions f1(z) and f2(z) are monotonically decreasing functions, further effects such as backlight correction and prevention of whiteout can be obtained. This is described below.
Figs. 13(a) and 13(b) show examples of monotonically decreasing degree-of-change functions f1(z) and f2(z). Three curves (a1 to a3 and b1 to b3) are shown in each figure, and each of them is an example of a monotonically decreasing function.
The degree-of-change function f1(z) is a function whose range straddles the value [1] and whose minimum value over the domain of the value [z] is not less than the value [0]. The degree-of-change function f2(z) is a function whose range straddles the value [0].
For example, in a dark part of the image with a large area, the value [z] of the processed signal US' is small, and the value of the degree-of-change function is large for such a small value [z]. That is, if a two-dimensional LUT created based on the transformation functions (a) to (d) is used, a relatively dark part of the image with a large area becomes brighter. Thus, in an image captured against backlight, for example, a dark part with a large area is improved and the visual effect is enhanced.
Also, for example, in a bright part of the image with a large area, the value [z] of the processed signal US' is large, and the value of the degree-of-change function corresponding to such a large value [z] is small. That is, if a two-dimensional LUT created based on the transformation functions (a) to (d) is used, a bright part of the image with a large area becomes darker. Thus, whiteout is improved, for example, in a bright part with a large area such as one illuminated by lights such as a whiteboard, and the visual effect is enhanced.
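As a numerical illustration of this behaviour (with illustrative, monotonically decreasing degree-of-change functions, not curves taken from Fig. 13), transformation function (c) brightens a dark pixel whose large-area surroundings are dark and darkens a bright pixel whose surroundings are bright:

    import numpy as np

    f1 = lambda z: np.maximum(1.5 - z, 0.0)   # decreasing gain, range straddles 1, never below 0
    f2 = lambda z: 0.15 - 0.3 * z             # decreasing offset, range straddles 0

    def convert(x, z):                        # transformation function (c)
        return np.clip(f1(z) * x + f2(z), 0.0, 1.0)

    print(convert(0.30, 0.20))   # dark pixel, dark surroundings -> brightened (~0.48)
    print(convert(0.80, 0.85))   # bright pixel, bright surroundings -> darkened (~0.42)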
[Modifications]
(1)
The above transformation functions are merely examples; any function may be used as long as it performs a conversion of the same nature.
(2)
The value of each element of the two-dimensional LUT need not be strictly determined by the above transformation functions.
For example, when the value given by the above transformation function exceeds the range of values that can be handled as the output signal OS', the two-dimensional LUT may store a value clipped to the range of values that can be handled as the output signal OS'.
(3)
The same processing as described above may also be performed without using a two-dimensional LUT. For example, the conversion unit 903 may compute the transformation functions (a) to (d) on the input signal IS' and the processed signal US' and output the output signal OS'.
(9)
The visual processing device may include a plurality of spatial processing units and perform visual processing using a plurality of unsharp signals produced with different degrees of spatial processing.
[Configuration]
Fig. 14 shows the configuration of a visual processing device 905. The visual processing device 905 is a device that performs visual processing of an input signal IS″, and comprises: a first processing unit 906a that performs first predetermined processing on the input signal IS″ and outputs a first processed signal U1; a second processing unit 906b that performs second predetermined processing on the input signal IS″ and outputs a second processed signal U2; and a conversion unit 908 that converts the input signal IS″ using the input signal IS″, the first processed signal U1, and the second processed signal U2.
The first processing unit 906a and the second processing unit 906b operate in the same way as the spatial processing unit 2 (see Fig. 1) and perform spatial processing of the input signal IS″. They may also perform the spatial processing described in modification (3) above.
Here, the first processing unit 906a and the second processing unit 906b differ in the size of the region of surrounding pixels used in the spatial processing.
Specifically, the first processing unit 906a uses the surrounding pixels contained in a region of 30 vertical pixels by 30 horizontal pixels centered on the pixel of interest (a smaller unsharp signal), whereas the second processing unit 906b uses the surrounding pixels contained in a region of 90 vertical pixels by 90 horizontal pixels centered on the pixel of interest (a larger unsharp signal). The regions of surrounding pixels described here are merely examples and are not limiting. To fully exhibit the visual processing effect, it is preferable to generate the unsharp signal from a fairly large region.
The conversion unit 908 has a LUT and outputs an output signal OS″ (value [y]) based on the input signal IS″ (value [x]), the first processed signal U1 (value [z1]), and the second processed signal U2 (value [z2]).
Here, the LUT held by the conversion unit 908 is a three-dimensional LUT that stores the value [y] of the output signal OS″ corresponding to the value [x] of the input signal IS″, the value [z1] of the first processed signal U1, and the value [z2] of the second processed signal U2. The value of each element of this three-dimensional LUT (that is, the value [y] of the output signal OS″) is determined based on a function of the value [x] of the input signal IS″, the value [z1] of the first processed signal U1, and the value [z2] of the second processed signal U2.
Although this three-dimensional LUT can realize the processing described in the embodiments above and in the embodiments that follow, it is described here for the case of [converting the brightness of the input signal IS″] and the case of [enhancing the input signal IS″].
[Case of Converting the Brightness of the Input Signal IS″]
The conversion unit 908 converts the input signal IS″ such that it becomes brighter as the value [z1] of the first processed signal U1 becomes smaller. However, if the value [z2] of the second processed signal U2 also becomes smaller, the degree of brightening is suppressed.
As an example of such a conversion, the value of each element of the three-dimensional LUT held by the conversion unit 908 is determined based on the following transformation function (e) or (f).
(Transformation Function (e))
Transformation function (e) is expressed as [y] = [f11(z1)/f12(z2)] x [x].
Here, the degree-of-change functions f11(z1) and f12(z2) are functions similar to the degree-of-change function f1(z) described in modification (8) above. The degree-of-change function f11(z1) and the degree-of-change function f12(z2) are different functions.
Thus, [f11(z1)/f12(z2)] acts as a gain on the input signal IS″; the gain of the input signal IS″ changes with the value of the first processed signal U1 and the value of the second processed signal U2, and with it the value [y] of the output signal OS″.
(Transformation Function (f))
Transformation function (f) is expressed as [y] = [x] + f21(z1) - f22(z2).
Here, the degree-of-change functions f21(z1) and f22(z2) are functions similar to the degree-of-change function f2(z) described in modification (8) above. The degree-of-change function f21(z1) and the degree-of-change function f22(z2) are different functions.
Thus, [f21(z1) - f22(z2)] acts as an offset on the input signal IS″; the offset of the input signal IS″ changes with the value of the first processed signal U1 and the value of the second processed signal U2, and with it the value [y] of the output signal OS″.
(Effects)
By converting with such transformation functions (e) and (f), it is possible to realize effects such as brightening the dark portion of a backlit area relative to its small surrounding region while, at the same time, brightening only the dark portions of large regions in a night-scene image.
(Modifications)
The processing in the conversion unit 908 is not limited to processing using a three-dimensional LUT; the same computation as transformation function (e) or (f) may be performed instead.
Further, the value of each element of the three-dimensional LUT need not be strictly determined based on transformation function (e) or (f).
[Case of Enhancing the Input Signal IS″]
When the conversion in the conversion unit 908 is a conversion that enhances the input signal IS″, a plurality of frequency components can be enhanced separately.
For example, a conversion performed using the first processed signal U1 can enhance higher-frequency shading components, while a conversion performed using the second processed signal U2 can enhance lower-frequency shading components.
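A minimal sketch of the two-unsharp-signal configuration, using transformation function (f), box filters of 30 x 30 and 90 x 90 pixels in place of the first and second processing units, and illustrative degree-of-change functions; the direct computation replaces the three-dimensional LUT as allowed by the modification above, and scipy is assumed to be available.

    import numpy as np
    from scipy.ndimage import uniform_filter  # assumed available

    f21 = lambda z1: 0.20 - 0.40 * z1    # illustrative decreasing offset for U1
    f22 = lambda z2: 0.10 - 0.20 * z2    # illustrative decreasing offset for U2

    def visual_process_two_scales(image):      # image normalized to [0, 1]
        u1 = uniform_filter(image, size=30)    # first processed signal U1 (30x30 region)
        u2 = uniform_filter(image, size=90)    # second processed signal U2 (90x90 region)
        y = image + f21(u1) - f22(u2)          # transformation function (f)
        return np.clip(y, 0.0, 1.0)

    img = np.random.rand(256, 256).astype(np.float32)
    out = visual_process_two_scales(img)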
(Profile Data)
In addition to what has been described above, the visual processing device 1 may hold profile data that realize various kinds of visual processing. Below, first to seventh profile data realizing various visual processing are described, together with the equations that characterize each set of profile data and the configurations of visual processing devices that realize visual processing equivalent to that of the visual processing device 1 provided with those profile data.
Each set of profile data is determined based on a mathematical expression that includes an operation for enhancing a value calculated from the input signal IS and the unsharp signal US. Here, the enhancing operation is, for example, an operation performed by a nonlinear enhancement function.
In this way, each set of profile data can realize enhancement adapted to the visual characteristics of the input signal IS, or enhancement adapted to the nonlinear characteristics of the machine that outputs the output signal OS.
(1)
[First profile data]
The first profile data is determined based on an operation that includes a function for enhancing the differences between values obtained by subjecting the input signal IS and the unsharp signal US to a predetermined conversion. By transforming the input signal IS and the unsharp signal US into a different space and then enhancing their difference, it becomes possible to realize, for example, enhancement suited to the visual characteristics.
This is described specifically below.
The value C of each element of the first profile data (the value of the output signal OS) is expressed, using the value A of the input signal IS, the value B of the unsharp signal US, a transforming function F1, an inverse transforming function F2, and an enhancement function F3, as C = F2(F1(A) + F3(F1(A) - F1(B))) (hereinafter referred to as formula M1).
Here, the transforming function F1 is a common logarithm function, and the inverse transforming function F2 is the exponential function (antilogarithm) that is the inverse of the common logarithm function. The enhancement function F3 is any one of the enhancement functions R1 to R3 shown in Figure 109.
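A minimal numerical sketch of formula M1, assuming a simple linear gain in place of the enhancement functions R1 to R3 of Figure 109:

```python
import math

def formula_m1(a, b, gain=1.5):
    """C = F2(F1(A) + F3(F1(A) - F1(B))) with F1 = log10 and F2 = 10**x.
    F3 is assumed here to be a simple linear gain; the text uses one of
    the enhancement functions R1 to R3 of Figure 109."""
    f1_a = math.log10(a)
    f1_b = math.log10(b)
    enhanced = gain * (f1_a - f1_b)      # F3 applied to the log-space difference
    return 10.0 ** (f1_a + enhanced)     # F2, the inverse of the common logarithm

# Example: a pixel brighter than its blurred surround is pushed further up.
print(formula_m1(a=0.6, b=0.4))
```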
[Equivalent visual processing unit 11]
Figure 15 shows a visual processing unit 11 that is equivalent to the visual processing unit 1 with the first profile data registered in the two-dimensional LUT 4.
The visual processing unit 11 is a device that outputs the output signal OS based on an operation that enhances the differences between values obtained by subjecting the input signal IS and the unsharp signal US to a predetermined conversion. By transforming the input signal IS and the unsharp signal US into a different space and then enhancing their difference, it becomes possible to realize, for example, enhancement suited to the visual characteristics.
As shown in Figure 15, the visual processing unit 11 comprises: a spatial manipulation portion 12 that performs spatial manipulation on the brightness value of each pixel of the original image obtained as the input signal IS and outputs an unsharp signal US; and a visual handling part 13 that uses the input signal IS and the unsharp signal US to perform visual processing of the original image and outputs an output signal OS.
The spatial manipulation portion 12 performs the same operation as the spatial manipulation portion 2 of the visual processing unit 1, so its explanation is omitted.
The visual handling part 13 comprises: a signal space transformation portion 14 that transforms the signal space of the input signal IS and the unsharp signal US and outputs a transformed input signal TIS and a transformed unsharp signal TUS; a subtraction portion 17 that receives the transformed input signal TIS as a first input and the transformed unsharp signal TUS as a second input and outputs a difference signal DS of the two; an enhancement portion 18 that receives the difference signal DS as input and outputs an enhanced signal TS obtained by enhancement processing; an addition portion 19 that receives the transformed input signal TIS as a first input and the enhanced signal TS as a second input and outputs an additive signal PS obtained by adding the two; and an inverse transformation portion 20 that receives the additive signal PS as input and outputs the output signal OS.
The signal space transformation portion 14 further includes: a first transformation portion 15 that receives the input signal IS as input and outputs the transformed input signal TIS; and a second transformation portion 16 that receives the unsharp signal US as input and outputs the transformed unsharp signal TUS.
[Operation of the equivalent visual processing unit 11]
The first transformation portion 15 uses the transforming function F1 to transform the input signal IS of value A into the transformed input signal TIS of value F1(A). The second transformation portion 16 uses the transforming function F1 to transform the unsharp signal US of value B into the transformed unsharp signal TUS of value F1(B). The subtraction portion 17 computes the difference between the transformed input signal TIS of value F1(A) and the transformed unsharp signal TUS of value F1(B) and outputs a difference signal DS of value F1(A) - F1(B). The enhancement portion 18 uses the enhancement function F3 to produce, from the difference signal DS of value F1(A) - F1(B), an enhanced signal TS of value F3(F1(A) - F1(B)). The addition portion 19 adds the transformed input signal TIS of value F1(A) and the enhanced signal TS of value F3(F1(A) - F1(B)) and outputs an additive signal PS of value F1(A) + F3(F1(A) - F1(B)). The inverse transformation portion 20 uses the inverse transforming function F2 to inversely transform the additive signal PS of value F1(A) + F3(F1(A) - F1(B)) and outputs an output signal OS of value F2(F1(A) + F3(F1(A) - F1(B))).
The calculations using the transforming function F1, the inverse transforming function F2, and the enhancement function F3 may each be performed with a one-dimensional LUT for the respective function, or without using a LUT.
[effect]
The visual processing unit 1 provided with the first profile data and the visual processing unit 11 achieve the same visual processing effect.
Through the transforming function F1, visual processing is performed on the transformed input signal TIS and the transformed unsharp signal TUS after transformation into logarithmic space. Human visual characteristics are logarithmic, so processing in logarithmic space realizes visual processing well suited to those characteristics.
(ii)
In either visual processing unit, contrast enhancement in logarithmic space is realized.
The conventional visual processing unit 400 shown in Figure 108 generally uses an unsharp signal US with a small degree of blur for edge enhancement. However, when the visual processing unit 400 performs contrast enhancement using an unsharp signal US with a large degree of blur, the bright portions of the original image are enhanced too little and the dark portions are enhanced too much, which is processing unsuited to the visual characteristics. That is, corrections toward brightening tend to be insufficient, while corrections toward darkening tend to be excessive.
In contrast, when visual processing is performed with the visual processing unit 1 or the visual processing unit 11, visual processing suited to the visual characteristics can be performed from the dark portions through to the bright portions, and the enhancement toward brightening and the enhancement toward darkening are well balanced.
(iii)
In the conventional visual processing unit 400, the output signal OS after visual processing can in some cases become negative, causing a malfunction.
In contrast, when the value C of an element of the profile data obtained from formula M1 exceeds the range 0 ≤ C ≤ 255, setting the value of that element to 0 or 255 prevents the corrected pixel signal from becoming negative or saturating, and thus prevents malfunctions. This can be realized regardless of the bit length used to express the elements of the profile data.
[variation]
The transforming function F1 is not limited to a logarithmic function. For example, the transforming function F1 may be a conversion that removes the gamma correction (for example, a gamma coefficient of 0.45) applied to the input signal IS, and the inverse transforming function F2 may be a conversion that applies that gamma correction to the input signal IS.
In this way, by removing the gamma correction applied to the input signal IS, processing can be performed on a signal with linear characteristics, which makes it possible to correct optical blur.
(ii)
In the visual processing unit 11, the visual handling part 13 may compute the above formula M1 directly from the input signal IS and the unsharp signal US without using the two-dimensional LUT 4. In that case, each of the functions F1 to F3 may be calculated using a one-dimensional LUT, or without one.
(2)
[Second profile data]
The second profile data is determined based on an operation that includes a function for enhancing the ratio between the input signal IS and the unsharp signal US. This makes it possible to realize, for example, visual processing that enhances the sharp component.
Furthermore, the second profile data is determined based on an operation that compresses the dynamic range of the input signal IS while enhancing its ratio to the unsharp signal US. This makes it possible to realize, for example, visual processing that compresses the dynamic range while enhancing the sharp component.
This is described specifically below.
The value C of each element of the second profile data (the value of the output signal OS) is expressed, using the value A of the input signal IS, the value B of the unsharp signal US, a dynamic range compression function F4, and an enhancement function F5, as C = F4(A) × F5(A/B) (hereinafter referred to as formula M2).
Here, the dynamic range compression function F4 is a monotonically increasing function that is convex upward, such as a power function, and is expressed, for example, as F4(x) = x^γ (0 < γ < 1). The enhancement function F5 is a power function, expressed, for example, as F5(x) = x^α (0 < α ≤ 1).
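A minimal sketch of formula M2, with example exponents assumed only for illustration:

```python
def formula_m2(a, b, gamma=0.6, alpha=0.4):
    """C = F4(A) * F5(A/B) with F4(x) = x**gamma and F5(x) = x**alpha.
    gamma and alpha are example values; the text only requires
    0 < gamma < 1 and 0 < alpha <= 1."""
    dr_compressed = a ** gamma       # dynamic range compression of the input value
    sharp_gain = (a / b) ** alpha    # enhancement of the ratio to the unsharp value
    return dr_compressed * sharp_gain

print(formula_m2(0.5, 0.4))
```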
[Equivalent visual processing unit 21]
Figure 16 shows a visual processing unit 21 that is equivalent to the visual processing unit 1 with the second profile data registered in the two-dimensional LUT 4.
The visual processing unit 21 is a device that outputs the output signal OS based on an operation that enhances the ratio between the input signal IS and the unsharp signal US. This makes it possible to realize, for example, visual processing that enhances the sharp component.
Furthermore, the visual processing unit 21 outputs the output signal OS based on an operation that compresses the dynamic range of the input signal IS while enhancing its ratio to the unsharp signal US. This makes it possible to realize, for example, visual processing that compresses the dynamic range while enhancing the sharp component.
As shown in Figure 16, the visual processing unit 21 comprises: a spatial manipulation portion 22 that performs spatial manipulation on the brightness value of each pixel of the original image obtained as the input signal IS and outputs an unsharp signal US; and a visual handling part 23 that uses the input signal IS and the unsharp signal US to perform visual processing of the original image and outputs an output signal OS.
The spatial manipulation portion 22 performs the same operation as the spatial manipulation portion 2 of the visual processing unit 1, so its explanation is omitted.
The visual handling part 23 comprises: a division portion 25 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a division signal RS obtained by dividing the input signal IS by the unsharp signal US; an enhancement portion 26 that receives the division signal RS as input and outputs an enhanced signal TS; and an output processing part 27 that receives the input signal IS as a first input and the enhanced signal TS as a second input and outputs the output signal OS. The output processing part 27 comprises: a DR compression portion 28 that receives the input signal IS as input and outputs a DR compressed signal DRS whose dynamic range (DR) has been compressed; and a multiplication portion 29 that receives the DR compressed signal DRS as a first input and the enhanced signal TS as a second input and outputs the output signal OS.
[Operation of the equivalent visual processing unit 21]
The operation of the visual handling part 23 is described in further detail.
The division portion 25 divides the input signal IS of value A by the unsharp signal US of value B and outputs a division signal RS of value A/B. The enhancement portion 26 uses the enhancement function F5 to produce, from the division signal RS of value A/B, an enhanced signal TS of value F5(A/B). The DR compression portion 28 uses the dynamic range compression function F4 to produce, from the input signal IS of value A, a DR compressed signal DRS of value F4(A). The multiplication portion 29 multiplies the DR compressed signal DRS of value F4(A) by the enhanced signal TS of value F5(A/B) and outputs an output signal OS of value F4(A) × F5(A/B).
The calculations using the dynamic range compression function F4 and the enhancement function F5 may each be performed with a one-dimensional LUT for the respective function, or without using a LUT.
[effect]
The visual processing unit 1 provided with the second profile data and the visual processing unit 21 achieve the same visual processing effect.
(i)
Conventionally, when the dynamic range of an entire image is compressed, a dynamic range compression function F4 such as that shown in Figure 17 is used to compress the gray levels so that the image does not saturate from the dark portions through to the highlights. That is, if the black level of the reproduction target in the image signal before compression is L0 and the maximum white level is L1, the dynamic range L1:L0 before compression is compressed to the dynamic range Q1:Q0 after compression. In this case the contrast, defined as the ratio of video levels, drops through the dynamic range compression to (Q1/Q0) × (L0/L1) times its original value. Here, the dynamic range compression function F4 is a power function that is convex upward.
In contrast, in the visual processing unit 1 provided with the second profile data and in the visual processing unit 21, the division signal RS of value A/B, that is, the sharp signal, is enhanced by the enhancement function F5 and multiplied by the DR compressed signal DRS, so the local contrast is enhanced. Here, the enhancement function F5 is a power function (F5(x) = x^α) as shown in Figure 18, which enhances toward the bright side when the value of the division signal RS is greater than 1 and toward the dark side when it is less than 1.
In general, human vision has the property that, as long as local contrast is maintained, an image appears to have the same contrast even if the overall contrast is reduced. Thus, the visual processing unit 1 provided with the second profile data and the visual processing unit 21 can realize visual processing that compresses the dynamic range without a visually perceived loss of contrast.
The effect of the present invention is described more specifically below.
Let the dynamic range compression function F4 be F4(x) = x^γ (for example, γ = 0.6) and the enhancement function F5 be F5(x) = x^α (for example, α = 0.4). Further, with the maximum white level of the input signal IS normalized to the value 1, let the black level of the reproduction target be the value 1/300; that is, the dynamic range of the input signal IS is 300:1.
When the dynamic range of this input signal IS is compressed using the dynamic range compression function F4, the dynamic range after compression becomes F4(1):F4(1/300), or approximately 30:1. That is, the dynamic range compression function F4 compresses the dynamic range to about one tenth.
On the other hand, the value C of the output signal OS is given by the above formula M2 as C = (A^0.6) × ((A/B)^0.4), that is, C = A/(B^0.4). Here, the value B can be regarded as constant within a narrow local range, so C is proportional to A. That is, the value C changes by the same ratio as the value A, and the local contrast does not change between the input signal IS and the output signal OS.
As noted above, human vision has the property that, as long as local contrast is maintained, an image appears to have the same contrast even if the overall contrast drops. Thus, the visual processing unit 1 provided with the second profile data and the visual processing unit 21 can realize visual processing that compresses the dynamic range without a visually perceived loss of contrast.
Moreover, if the exponent α of the enhancement function F5 shown in Figure 18 is made larger than 0.4, the apparent contrast of the output signal OS can even be made higher than that of the input signal IS while the dynamic range is still compressed.
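The figures in this example can be checked with the short sketch below, using the values assumed above (γ = 0.6, α = 0.4, and a 300:1 input dynamic range):

```python
gamma, alpha = 0.6, 0.4

def f4(x): return x ** gamma    # dynamic range compression function
def f5(x): return x ** alpha    # enhancement function

# Overall dynamic range after compression: F4(1) : F4(1/300), roughly 30 : 1.
print(f4(1.0) / f4(1.0 / 300.0))        # ~30.6

# Local contrast: with B constant, C = A / B**alpha is proportional to A,
# so the ratio of two nearby pixel values is preserved.
b = 0.2
a1, a2 = 0.22, 0.18
c1 = f4(a1) * f5(a1 / b)
c2 = f4(a2) * f5(a2 / b)
print(a1 / a2, c1 / c2)                 # both ~1.222
```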
(iii)
The present invention is particularly effective in the following situations for achieving the above effect. For example, an image with good overall contrast, from the dark portions through to the highlights, can be reproduced on a display whose physical dynamic range is small. Likewise, a high-contrast image can be displayed with a television projector in a bright environment, and a high-contrast print can be obtained with low-density (light-colored) ink.
[variation]
(i)
In the visual processing unit 21, the visual handling part 23 may compute the above formula M2 directly from the input signal IS and the unsharp signal US without using the two-dimensional LUT 4. In that case, each of the functions F4 and F5 may be calculated using a one-dimensional LUT, or without one.
(ii)
In addition, when the value C of an element of the profile data obtained from formula M2 satisfies C > 255, the value of that element can be set to 255.
(3)
[Third profile data]
The third profile data is determined based on an operation that includes a function for enhancing the ratio between the input signal IS and the unsharp signal US. This makes it possible to realize, for example, visual processing that enhances the sharp component.
This is described specifically below.
In the formula M2 of the second profile data above, the dynamic range compression function F4 may be a directly proportional function with a proportionality coefficient of 1. In that case, the value C of each element of the third profile data (the value of the output signal OS) is expressed, using the value A of the input signal IS, the value B of the unsharp signal US, and the enhancement function F5, as C = A × F5(A/B) (hereinafter referred to as formula M3).
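A minimal sketch of formula M3, again with an example exponent assumed for the enhancement function F5:

```python
def formula_m3(a, b, alpha=0.4):
    """C = A * F5(A/B) with F5(x) = x**alpha (alpha is an example value).
    This is formula M2 with F4 replaced by the identity (proportionality 1)."""
    return a * (a / b) ** alpha

print(formula_m3(0.5, 0.4))
```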
[Equivalent visual processing unit 31]
Figure 19 shows a visual processing unit 31 that is equivalent to the visual processing unit 1 with the third profile data registered in the two-dimensional LUT 4.
The visual processing unit 31 is a device that outputs the output signal OS based on an operation that enhances the ratio between the input signal IS and the unsharp signal US. This makes it possible to realize, for example, visual processing that enhances the sharp component.
As shown in Figure 19, the visual processing unit 31 differs from the visual processing unit 21 shown in Figure 16 in that it has no DR compression portion 28. Below, parts of the visual processing unit 31 shown in Figure 19 that operate in the same way as in the visual processing unit 21 shown in Figure 16 are given the same reference symbols, and their detailed explanation is omitted.
The visual processing unit 31 comprises: a spatial manipulation portion 22 that performs spatial manipulation on the brightness value of each pixel of the original image obtained as the input signal IS and outputs an unsharp signal US; and a visual handling part 32 that uses the input signal IS and the unsharp signal US to perform visual processing of the original image and outputs an output signal OS.
The spatial manipulation portion 22 performs the same operation as the spatial manipulation portion 2 of the visual processing unit 1, so its explanation is omitted.
The visual handling part 32 comprises: a division portion 25 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a division signal RS obtained by dividing the input signal IS by the unsharp signal US; an enhancement portion 26 that receives the division signal RS as input and outputs an enhanced signal TS; and a multiplication portion 33 that receives the input signal IS as a first input and the enhanced signal TS as a second input and outputs the output signal OS.
[Operation of the equivalent visual processing unit 31]
The division portion 25 and the enhancement portion 26 operate in the same way as described for the visual processing unit 21 shown in Figure 16.
The multiplication portion 33 multiplies the input signal IS of value A by the enhanced signal TS of value F5(A/B) and outputs an output signal OS of value A × F5(A/B). Here, the enhancement function F5 is the same as that shown in Figure 18.
The calculation using the enhancement function F5, as in the visual processing unit 21 shown in Figure 16, may be performed with a one-dimensional LUT, or without using a LUT.
[effect]
The visual processing unit 1 provided with the third profile data and the visual processing unit 31 achieve the same visual processing effect.
(i)
In the enhancement portion 26, the sharp signal expressed as the ratio of the input signal IS to the unsharp signal US (the division signal RS) is enhanced, and the enhanced sharp signal is multiplied by the input signal IS. Enhancing the sharp signal expressed as this ratio is equivalent to computing the difference between the input signal IS and the unsharp signal US in logarithmic space, so visual processing suited to the logarithmic visual characteristics of humans is realized.
(ii)
The amount of enhancement produced by the enhancement function F5 is large when the input signal IS is large (bright) and small when the input signal IS is small (dark). Moreover, the amount of enhancement toward brightening is larger than the amount of enhancement toward darkening. This realizes visual processing that is suited to the visual characteristics and well balanced.
(iii)
In addition, when the value C of an element of the profile data obtained from formula M3 satisfies C > 255, the value C of that element can be set to 255.
(iv)
In the processing using formula M3, no compression is applied to the dynamic range of the input signal IS; however, because the local contrast can be enhanced, a visual impression of dynamic range compression or expansion can still be produced.
(4)
[Fourth profile data]
The fourth profile data is determined based on an operation that includes a function for enhancing the difference between the input signal IS and the unsharp signal US in accordance with the value of the input signal IS. This makes it possible, for example, to enhance the sharp component of the input signal IS in accordance with the value of the input signal IS, so that appropriate enhancement can be performed from the dark portions to the bright portions of the input signal IS.
Furthermore, the fourth profile data is determined based on an operation in which the value obtained by compressing the dynamic range of the input signal IS is added to the enhanced value. This makes it possible to compress the dynamic range while enhancing the sharp component of the input signal IS in accordance with the value of the input signal IS.
This is described specifically below.
The value C of each element of the fourth profile data (the value of the output signal OS) is expressed, using the value A of the input signal IS, the value B of the unsharp signal US, an enhancement amount adjustment function F6, an enhancement function F7, and a dynamic range compression function F8, as C = F8(A) + F6(A) × F7(A - B) (hereinafter referred to as formula M4).
Here, the enhancement amount adjustment function F6 is a function that increases monotonically with the value of the input signal IS: when the value A of the input signal IS is small, the value of the enhancement amount adjustment function F6 is also small, and when the value A is large, the value of F6 is also large. The enhancement function F7 is any one of the enhancement functions R1 to R3 shown in Figure 109. The dynamic range compression function F8 is the power function shown in Figure 17, expressed as F8(x) = x^γ (0 < γ < 1).
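A minimal sketch of formula M4; the identity-like choices for F6 and F7 below are placeholders, not the functions of Figure 109:

```python
def f6(a):              # enhancement amount adjustment: assumed simple monotone increase
    return a

def f7(d):              # enhancement function: assumed identity in place of R1 to R3
    return d

def f8(a, gamma=0.6):   # dynamic range compression, F8(x) = x**gamma
    return a ** gamma

def formula_m4(a, b):
    """C = F8(A) + F6(A) * F7(A - B); F6 and F7 above are placeholder choices."""
    return f8(a) + f6(a) * f7(a - b)

print(formula_m4(0.5, 0.4))
```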
[Equivalent visual processing unit 41]
Figure 20 shows a visual processing unit 41 that is equivalent to the visual processing unit 1 with the fourth profile data registered in the two-dimensional LUT 4.
The visual processing unit 41 is a device that outputs the output signal OS based on an operation that enhances the difference between the input signal IS and the unsharp signal US in accordance with the value of the input signal IS. This makes it possible, for example, to enhance the sharp component of the input signal IS in accordance with the value of the input signal IS, so that appropriate enhancement can be performed from the dark portions to the bright portions of the input signal IS.
Furthermore, the visual processing unit 41 outputs the output signal OS based on an operation in which the value obtained by compressing the dynamic range of the input signal IS is added to the enhanced value. This makes it possible to compress the dynamic range while enhancing the sharp component of the input signal IS in accordance with the value of the input signal IS.
As shown in Figure 20, the visual processing unit 41 comprises: a spatial manipulation portion 42 that performs spatial manipulation on the brightness value of each pixel of the original image obtained as the input signal IS and outputs an unsharp signal US; and a visual handling part 43 that uses the input signal IS and the unsharp signal US to perform visual processing of the original image and outputs an output signal OS.
The spatial manipulation portion 42 performs the same operation as the spatial manipulation portion 2 of the visual processing unit 1, so its explanation is omitted.
The visual handling part 43 comprises: a subtraction portion 44 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a difference signal DS of the two; an enhancement portion 45 that receives the difference signal DS as input and outputs an enhanced signal TS; an enhancement amount adjustment portion 46 that receives the input signal IS as input and outputs an enhancement amount adjustment signal IC; a multiplication portion 47 that receives the enhancement amount adjustment signal IC as a first input and the enhanced signal TS as a second input and outputs a multiplication signal MS obtained by multiplying the two; and an output processing part 48 that receives the input signal IS as a first input and the multiplication signal MS as a second input and outputs the output signal OS. The output processing part 48 comprises: a DR compression portion 49 that receives the input signal IS as input and outputs a DR compressed signal DRS whose dynamic range (DR) has been compressed; and an addition portion 50 that receives the DR compressed signal DRS as a first input and the multiplication signal MS as a second input and outputs the output signal OS.
[Operation of the equivalent visual processing unit 41]
The operation of the visual handling part 43 is described in further detail.
The subtraction portion 44 computes the difference between the input signal IS of value A and the unsharp signal US of value B and outputs a difference signal DS of value A - B. The enhancement portion 45 uses the enhancement function F7 to produce, from the difference signal DS of value A - B, an enhanced signal TS of value F7(A - B). The enhancement amount adjustment portion 46 uses the enhancement amount adjustment function F6 to produce, from the input signal IS of value A, an enhancement amount adjustment signal IC of value F6(A). The multiplication portion 47 multiplies the enhancement amount adjustment signal IC of value F6(A) by the enhanced signal TS of value F7(A - B) and outputs a multiplication signal MS of value F6(A) × F7(A - B). The DR compression portion 49 uses the dynamic range compression function F8 to produce, from the input signal IS of value A, a DR compressed signal DRS of value F8(A). The addition portion 50 adds the DR compressed signal DRS and the multiplication signal MS of value F6(A) × F7(A - B) and outputs an output signal OS of value F8(A) + F6(A) × F7(A - B).
The calculations using the enhancement amount adjustment function F6, the enhancement function F7, and the dynamic range compression function F8 may each be performed with a one-dimensional LUT for the respective function, or without using a LUT.
[effect]
The visual processing unit 1 provided with the fourth profile data and the visual processing unit 41 achieve the same visual processing effect.
(i)
The amount of enhancement of the difference signal DS is adjusted according to the value A of the input signal IS. Therefore, the dynamic range can be compressed while the local contrast is maintained from the dark portions to the bright portions.
(ii)
The enhancement amount adjustment function F6 is a monotonically increasing function, but it may be a function whose increase becomes smaller as the value A of the input signal IS becomes larger. In that case, the value of the output signal OS is prevented from saturating.
(iii)
When the enhancement function F7 is the enhancement function R2 shown in Figure 109, the amount of enhancement can be suppressed when the absolute value of the difference signal DS becomes large. This prevents the amount of enhancement from saturating in portions of high sharpness, so visually natural processing can be performed.
[variation]
(i)
In the visual processing unit 41, the visual handling part 43 may compute the above formula M4 directly from the input signal IS and the unsharp signal US without using the two-dimensional LUT 4. In that case, each of the functions F6 to F8 may be calculated using a one-dimensional LUT, or without one.
(ii)
When the enhancement function F7 is a directly proportional function with a proportionality coefficient of 1, there is no particular need to provide the enhancement portion 45.
(iii)
In addition, when the value C of an element of the profile data obtained from formula M4 exceeds the range 0 ≤ C ≤ 255, the value C of that element can be set to 0 or 255.
(5)
[Fifth profile data]
The fifth profile data is determined based on an operation that includes a function for enhancing the difference between the input signal IS and the unsharp signal US in accordance with the value of the input signal IS. This makes it possible, for example, to enhance the sharp component of the input signal IS in accordance with the value of the input signal IS, so that appropriate enhancement can be performed from the dark portions to the bright portions of the input signal IS.
This is described specifically below.
In the formula M4 of the fourth profile data above, the dynamic range compression function F8 may be a directly proportional function with a proportionality coefficient of 1. In that case, the value C of each element of the fifth profile data (the value of the output signal OS) is expressed, using the value A of the input signal IS, the value B of the unsharp signal US, the enhancement amount adjustment function F6, and the enhancement function F7, as C = A + F6(A) × F7(A - B) (hereinafter referred to as formula M5).
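A minimal sketch of formula M5 under the same placeholder choices of F6 and F7 as in the sketch for formula M4:

```python
def formula_m5(a, b):
    """C = A + F6(A) * F7(A - B); here F6(A) = A and F7(d) = d are assumed
    placeholders for the enhancement amount adjustment and enhancement functions."""
    f6 = a        # enhancement amount grows with the input value
    f7 = a - b    # difference (sharp component), passed through 1:1 here
    return a + f6 * f7

print(formula_m5(0.5, 0.4))   # -> 0.55
```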
[Equivalent visual processing unit 51]
Figure 21 shows a visual processing unit 51 that is equivalent to the visual processing unit 1 with the fifth profile data registered in the two-dimensional LUT 4.
The visual processing unit 51 is a device that outputs the output signal OS based on an operation that enhances the difference between the input signal IS and the unsharp signal US in accordance with the value of the input signal IS. This makes it possible, for example, to enhance the sharp component of the input signal IS in accordance with the value of the input signal IS, so that appropriate enhancement can be performed from the dark portions to the bright portions of the input signal IS.
As shown in Figure 21, the visual processing unit 51 differs from the visual processing unit 41 shown in Figure 20 in that it has no DR compression portion 49. Below, parts of the visual processing unit 51 shown in Figure 21 that operate in the same way as in the visual processing unit 41 shown in Figure 20 are given the same reference symbols, and their detailed explanation is omitted.
The visual processing unit 51 comprises: a spatial manipulation portion 42 that performs spatial manipulation on the brightness value of each pixel of the original image obtained as the input signal IS and outputs an unsharp signal US; and a visual handling part 52 that uses the input signal IS and the unsharp signal US to perform visual processing of the original image and outputs an output signal OS.
The spatial manipulation portion 42 performs the same operation as the spatial manipulation portion 2 of the visual processing unit 1, so its explanation is omitted.
The visual handling part 52 comprises: a subtraction portion 44 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a difference signal DS of the two; an enhancement portion 45 that receives the difference signal DS as input and outputs an enhanced signal TS; an enhancement amount adjustment portion 46 that receives the input signal IS as input and outputs an enhancement amount adjustment signal IC; a multiplication portion 47 that receives the enhancement amount adjustment signal IC as a first input and the enhanced signal TS as a second input and outputs a multiplication signal MS obtained by multiplying the two; and an addition portion 53 that receives the input signal IS as a first input and the multiplication signal MS as a second input and outputs the output signal OS.
[Operation of the equivalent visual processing unit 51]
The operation of the visual handling part 52 is described in further detail.
The subtraction portion 44, the enhancement portion 45, the enhancement amount adjustment portion 46, and the multiplication portion 47 operate in the same way as described for the visual processing unit 41 shown in Figure 20.
The addition portion 53 adds the input signal IS of value A and the multiplication signal MS of value F6(A) × F7(A - B) and outputs an output signal OS of value A + F6(A) × F7(A - B).
The calculations using the enhancement amount adjustment function F6 and the enhancement function F7, as in the visual processing unit 41 shown in Figure 20, may each be performed with a one-dimensional LUT for the respective function, or without using a LUT.
[effect]
The visual processing unit 1 provided with the fifth profile data and the visual processing unit 51 achieve the same visual processing effect. They also achieve substantially the same visual effect as the visual processing unit 1 provided with the fourth profile data and the visual processing unit 41.
(i)
The amount of enhancement of the difference signal DS is adjusted according to the value A of the input signal IS. Therefore, the amount of contrast enhancement can be kept uniform from the dark portions to the bright portions.
[variation]
(i)
When the enhancement function F7 is a directly proportional function with a proportionality coefficient of 1, there is no particular need to provide the enhancement portion 45.
(ii)
In addition, when the value C of an element of the profile data obtained from formula M5 exceeds the range 0 ≤ C ≤ 255, the value C of that element can be set to 0 or 255.
(6)
[Sixth profile data]
The sixth profile data is determined based on an operation in which the value of the input signal IS is added to the value obtained by enhancing the difference between the input signal IS and the unsharp signal US, and gray scale correction is then performed on the result. This makes it possible to realize, for example, visual processing in which gray scale correction is performed on an input signal IS whose sharp component has been enhanced.
This is described specifically below.
The value C of each element of the sixth profile data (the value of the output signal OS) is expressed, using the value A of the input signal IS, the value B of the unsharp signal US, an enhancement function F9, and a gray scale correction function F10, as C = F10(A + F9(A - B)) (hereinafter referred to as formula M6).
Here, the enhancement function F9 is any one of the enhancement functions R1 to R3 shown in Figure 109. The gray scale correction function F10 is a function used in ordinary gray scale correction, for example a gamma correction function, an S-shaped gray scale correction function, or an inverse S-shaped gray scale correction function.
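A minimal sketch of formula M6, assuming a linear gain for F9 and a simple gamma curve for F10:

```python
def formula_m6(a, b, gamma=0.45, gain=1.5):
    """C = F10(A + F9(A - B)); F9 is assumed to be a linear gain and F10 a
    gamma-correction curve, standing in for the functions of Figure 109 and
    a general gray scale correction function."""
    enhanced = a + gain * (a - b)            # add the enhanced sharp component
    enhanced = min(max(enhanced, 0.0), 1.0)  # keep within [0, 1] before correction
    return enhanced ** gamma                 # gray scale correction F10

print(formula_m6(0.5, 0.4))
```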
[Equivalent visual processing unit 61]
Figure 22 shows a visual processing unit 61 that is equivalent to the visual processing unit 1 with the sixth profile data registered in the two-dimensional LUT 4.
The visual processing unit 61 is a device that outputs the output signal OS based on an operation in which the value of the input signal IS is added to the value obtained by enhancing the difference between the input signal IS and the unsharp signal US, and gray scale correction is then performed on the result. This makes it possible to realize, for example, visual processing in which gray scale correction is performed on an input signal IS whose sharp component has been enhanced.
As shown in Figure 22, the visual processing unit 61 comprises: a spatial manipulation portion 62 that performs spatial manipulation on the brightness value of each pixel of the original image obtained as the input signal IS and outputs an unsharp signal US; and a visual handling part 63 that uses the input signal IS and the unsharp signal US to perform visual processing of the original image and outputs an output signal OS.
The spatial manipulation portion 62 performs the same operation as the spatial manipulation portion 2 of the visual processing unit 1, so its explanation is omitted.
The visual handling part 63 comprises: a subtraction portion 64 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a difference signal DS of the two; an enhancement portion 65 that receives the difference signal DS as input and outputs an enhanced signal TS obtained by enhancement processing; an addition portion 66 that receives the input signal IS as a first input and the enhanced signal TS as a second input and outputs an additive signal PS obtained by adding the two; and a gray scale correction portion 67 that receives the additive signal PS as input and outputs the output signal OS.
[Operation of the equivalent visual processing unit 61]
The operation of the visual handling part 63 is described in further detail.
The subtraction portion 64 computes the difference between the input signal IS of value A and the unsharp signal US of value B and outputs a difference signal DS of value A - B. The enhancement portion 65 uses the enhancement function F9 to produce, from the difference signal DS of value A - B, an enhanced signal TS of value F9(A - B). The addition portion 66 adds the input signal IS of value A and the enhanced signal TS of value F9(A - B) and outputs an additive signal PS of value A + F9(A - B). The gray scale correction portion 67 uses the gray scale correction function F10 to produce, from the additive signal PS of value A + F9(A - B), an output signal OS of value F10(A + F9(A - B)).
The calculations using the enhancement function F9 and the gray scale correction function F10 may each be performed with a one-dimensional LUT for the respective function, or without using a LUT.
[effect]
The visual processing unit 1 provided with the sixth profile data and the visual processing unit 61 achieve the same visual processing effect.
(i)
The difference signal DS is enhanced by the enhancement function F9 and added to the input signal IS, so the contrast of the input signal IS can be enhanced. Furthermore, the gray scale correction portion 67 performs gray scale correction on the additive signal PS; this makes it possible, for example, to further enhance the contrast of the halftones that occur most frequently in the original image, or to brighten the additive signal PS as a whole. As a result, spatial processing and gray scale processing can be realized together at the same time.
[variation]
(i)
In the visual processing unit 61, the visual handling part 63 may compute the above formula M6 directly from the input signal IS and the unsharp signal US without using the two-dimensional LUT 4. In that case, each of the functions F9 and F10 may be calculated using a one-dimensional LUT, or without one.
(ii)
In addition, when the value C of an element of the profile data obtained from formula M6 exceeds the range 0 ≤ C ≤ 255, the value of that element can be set to 0 or 255.
(7)
[Seventh profile data]
The seventh profile data is determined based on an operation in which the value obtained by performing gray scale correction on the input signal IS is added to the value obtained by enhancing the difference between the input signal IS and the unsharp signal US. Here, the enhancement of the sharp component and the gray scale correction of the input signal IS are performed independently of each other, so a constant amount of sharp-component enhancement can be performed regardless of the amount of gray scale correction applied to the input signal IS.
This is described specifically below.
The value C of each element of the seventh profile data (the value of the output signal OS) is expressed, using the value A of the input signal IS, the value B of the unsharp signal US, an enhancement function F11, and a gray scale correction function F12, as C = F12(A) + F11(A - B) (hereinafter referred to as formula M7).
Here, the enhancement function F11 is any one of the enhancement functions R1 to R3 shown in Figure 109. The gray scale correction function F12 is, for example, a gamma correction function, an S-shaped gray scale correction function, or an inverse S-shaped gray scale correction function.
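A minimal sketch of formula M7, with the same assumed stand-ins for the enhancement and gray scale correction functions:

```python
def formula_m7(a, b, gamma=0.45, gain=1.5):
    """C = F12(A) + F11(A - B); F12 is assumed to be a gamma-correction curve
    and F11 a linear gain, so the sharp enhancement is independent of the
    gray scale correction applied to A."""
    return a ** gamma + gain * (a - b)

print(formula_m7(0.5, 0.4))
```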
[Equivalent visual processing unit 71]
Figure 23 shows a visual processing unit 71 that is equivalent to the visual processing unit 1 with the seventh profile data registered in the two-dimensional LUT 4.
The visual processing unit 71 is a device that outputs the output signal OS based on an operation in which the value obtained by performing gray scale correction on the input signal IS is added to the value obtained by enhancing the difference between the input signal IS and the unsharp signal US. Here, the enhancement of the sharp component and the gray scale correction of the input signal IS are performed independently of each other, so a constant amount of sharp-component enhancement can be performed regardless of the amount of gray scale correction applied to the input signal IS.
As shown in Figure 23, the visual processing unit 71 comprises: a spatial manipulation portion 72 that performs spatial manipulation on the brightness value of each pixel of the original image obtained as the input signal IS and outputs an unsharp signal US; and a visual handling part 73 that uses the input signal IS and the unsharp signal US to perform visual processing of the original image and outputs an output signal OS.
The spatial manipulation portion 72 performs the same operation as the spatial manipulation portion 2 of the visual processing unit 1, so its explanation is omitted.
The visual handling part 73 comprises: a subtraction portion 74 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a difference signal DS of the two; an enhancement portion 75 that receives the difference signal DS as input and outputs an enhanced signal TS obtained by enhancement processing; a gray scale correction portion 76 that receives the input signal IS as input and outputs a gray scale corrected signal GC; and an addition portion 77 that receives the gray scale corrected signal GC as a first input and the enhanced signal TS as a second input and outputs the output signal OS.
[Operation of the equivalent visual processing unit 71]
The operation of the visual handling part 73 is described in further detail.
The subtraction portion 74 computes the difference between the input signal IS of value A and the unsharp signal US of value B and outputs a difference signal DS of value A - B. The enhancement portion 75 uses the enhancement function F11 to produce, from the difference signal DS of value A - B, an enhanced signal TS of value F11(A - B). The gray scale correction portion 76 uses the gray scale correction function F12 to produce, from the input signal IS of value A, a gray scale corrected signal GC of value F12(A). The addition portion 77 adds the gray scale corrected signal GC of value F12(A) and the enhanced signal TS of value F11(A - B) and outputs an output signal OS of value F12(A) + F11(A - B).
The calculations using the enhancement function F11 and the gray scale correction function F12 may each be performed with a one-dimensional LUT for the respective function, or without using a LUT.
[effect]
The visual processing unit 1 provided with the seventh profile data and the visual processing unit 71 achieve the same visual processing effect.
(i)
The input signal IS is subjected to gray scale correction by the gray scale correction portion 76 and then added to the enhanced signal TS. Therefore, even in regions where the gray scale change of the gray scale correction function F12 is small, that is, regions where the contrast is reduced, the subsequent addition of the enhanced signal TS still enhances the local contrast.
[variation]
(i)
In the visual processing unit 71, the visual handling part 73 may compute the above formula M7 directly from the input signal IS and the unsharp signal US without using the two-dimensional LUT 4. In that case, each of the functions F11 and F12 may be calculated using a one-dimensional LUT, or without one.
(ii)
In addition, when the value C of an element of the profile data obtained from formula M7 exceeds the range 0 ≤ C ≤ 255, the value C of that element can be set to 0 or 255.
(8)
[Modifications of the first through seventh profile data]
In (1) through (7) above, each element of the first through seventh profile data was described as holding the value calculated from formulas M1 through M7, and it was explained that, in each profile data, the value of an element may be limited when the value calculated from formulas M1 through M7 exceeds the range of values that the profile data can store.
Furthermore, some of the values in the profile data may be arbitrary. For example, for a very small light in a very dark night scene (such as a neon light in a night scene), the value of the input signal IS is large while the value of the unsharp signal US is very small, and in such cases the value produced by visual processing has little influence on image quality. For portions where the processed value has little influence on image quality in this way, the value stored in the profile data may be an approximation of the value calculated from formulas M1 through M7, or any other value.
Even when the stored value is an approximation of the value calculated from formulas M1 through M7 or an arbitrary value, it is preferable that the values stored for an input signal IS and an unsharp signal US of the same value maintain a monotonically increasing or monotonically decreasing relationship with respect to the input signal IS and the unsharp signal US. The values that the profile data created from formulas M1 through M7 and the like stores for equal values of the input signal IS and the unsharp signal US outline the characteristics of that profile data. Therefore, in order to preserve the characteristics of the two-dimensional LUT, it is preferable to tune the profile data while maintaining this relationship.
[the 2nd execution mode]
Use Figure 24~Figure 39, describe about visual processing unit 600 as the 2nd execution mode of the present invention.
Visual processing unit 600, be that picture signal (input signal IS) is carried out the visual processing unit of visual processing with visual processing image (output signal OS) output, carry out device with the corresponding visual processing of the environment that the display unit (not shown) that shows output signal OS is set (below be called display environment).
Specifically, visual processing unit 600 is to improve the device of reduction that influence because of the surround lighting of display environment causes " the visual contrast degree " of display image by the visual processing that utilizes human visual characteristic.
Visual processing unit 600 is a kind of machines that for example image of computer, television set, digital camera, portable phone, PDA, printer, scanner etc. is handled, and constitutes the device and the image processing apparatus of the look processing of carrying out picture signal.
[visual processing unit 600]
Figure 24 shows the basic configuration of the visual processing unit 600.
The visual processing unit 600 comprises a target contrast transformation portion 601, a transformed signal processing portion 602, an actual contrast transformation portion 603, a target contrast setting portion 604, and an actual contrast setting portion 605.
The target contrast transformation portion 601 receives the input signal IS as a first input and the target contrast C1 set in the target contrast setting portion 604 as a second input, and outputs a target contrast signal JS. The definition of the target contrast C1 is given later.
The transformed signal processing portion 602 receives the target contrast signal JS as a first input, the target contrast C1 as a second input, and the actual contrast C2 set in the actual contrast setting portion 605 as a third input, and outputs a visual processing signal KS, which is the target contrast signal JS after visual processing. The definition of the actual contrast C2 is given later.
The actual contrast transformation portion 603 receives the visual processing signal KS as a first input and the actual contrast C2 as a second input, and outputs the output signal OS.
The target contrast setting portion 604 and the actual contrast setting portion 605 allow the user to set the values of the target contrast C1 and the actual contrast C2 via an input interface or the like.
The details of each portion are described below.
[Target contrast transformation portion 601]
The target contrast transformation portion 601 transforms the input signal IS input to the visual processing unit 600 into a target contrast signal JS suited to contrast expression. Here, the brightness values of the image input to the visual processing unit 600 are expressed in the input signal IS as gray levels in the range of values [0.0 to 1.0].
The target contrast transformation portion 601 uses the target contrast C1 (value [m]) to transform the input signal IS (value [P]) by "formula M20", and outputs the target contrast signal JS (value [A]). Here, formula M20 is A = {(m - 1)/m} × P + 1/m.
The value [m] of the target contrast C1 is set to the contrast value at which the displayed image produced by the display device appears to have the best contrast.
Here, the contrast value is the value expressing the brightness of the white level of the image relative to its black level, that is, the brightness value of the white level when the black level is taken as 1 (black level : white level = 1 : m).
The value [m] of the target contrast C1 is suitably set to about 100 to 1000 (black level : white level = 1:100 to 1:1000), and may also be determined based on the white-to-black level ratio that the display device is capable of displaying.
The transformation performed by formula M20 is further described using Figure 25. Figure 25 shows the relationship between the value of the input signal IS (horizontal axis) and the value of the target contrast signal JS (vertical axis). As shown in Figure 25, the target contrast transformation portion 601 transforms the input signal IS in the range of values [0.0 to 1.0] into a target contrast signal JS in the range of values [1/m to 1.0].
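A minimal sketch of formula M20, with an example target contrast m assumed for illustration:

```python
def formula_m20(p, m=200.0):
    """A = ((m - 1) / m) * P + 1 / m; maps P in [0.0, 1.0] to A in [1/m, 1.0].
    m = 200 is an arbitrary example value of the target contrast C1."""
    return ((m - 1.0) / m) * p + 1.0 / m

print(formula_m20(0.0), formula_m20(1.0))   # -> 0.005 (= 1/m) and 1.0
```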
[Transformed signal processing portion 602]
The details of the transformed signal processing portion 602 are described using Figure 24.
The transformed signal processing portion 602 compresses the dynamic range of the input target contrast signal JS while maintaining its local contrast, and outputs a visual processing signal KS. Specifically, the transformed signal processing portion 602 has the same configuration, operation, and effect as the visual processing unit 21 described in the 1st execution mode (see Figure 16), with the input signal IS (see Figure 16) regarded as the target contrast signal JS and the output signal OS (see Figure 16) regarded as the visual processing signal KS.
The transformed signal processing portion 602 outputs the visual processing signal KS based on an operation that enhances the ratio between the target contrast signal JS and the unsharp signal US. This makes it possible to realize, for example, visual processing that enhances the sharp component.
Furthermore, the transformed signal processing portion 602 outputs the visual processing signal KS based on an operation that compresses the dynamic range of the target contrast signal JS while enhancing its ratio to the unsharp signal US. This makes it possible to realize, for example, visual processing that compresses the dynamic range while enhancing the sharp component.
[Configuration of the transformed signal processing portion 602]
The transformed signal processing portion 602 comprises: a spatial manipulation portion 622 that performs spatial manipulation on the brightness value of each pixel of the target contrast signal JS and outputs an unsharp signal US; and a visual handling part 623 that uses the target contrast signal JS and the unsharp signal US to perform visual processing on the target contrast signal JS and outputs the visual processing signal KS.
The spatial manipulation portion 622 performs the same operation as the spatial manipulation portion 2 of the visual processing unit 1 (see Fig. 1), so its detailed explanation is omitted.
The visual handling part 623 comprises a division portion 625, an enhancement portion 626, and an output processing part 627 that has a DR compression portion 628 and a multiplication portion 629.
The division portion 625 receives the target contrast signal JS as a first input and the unsharp signal US as a second input, and outputs a division signal RS obtained by dividing the target contrast signal JS by the unsharp signal US. The enhancement portion 626 receives the division signal RS as a first input, the target contrast C1 as a second input, and the actual contrast C2 as a third input, and outputs an enhanced signal TS.
The output processing part 627 receives the target contrast signal JS as a first input, the enhanced signal TS as a second input, the target contrast C1 as a third input, and the actual contrast C2 as a fourth input, and outputs the visual processing signal KS. The DR compression portion 628 receives the target contrast signal JS as a first input, the target contrast C1 as a second input, and the actual contrast C2 as a third input, and outputs a DR compressed signal DRS whose dynamic range (DR) has been compressed. The multiplication portion 629 receives the DR compressed signal DRS as a first input and the enhanced signal TS as a second input, and outputs the visual processing signal KS.
[effect of figure signal handling part 602]
Figure signal handling part 602 uses the target contrast C1 (value [m]) and the actual contrast C2 (value [n]) to transform the target contrast signal JS (value [A]) according to formula M2, and outputs the visual processing signal KS (value [C]). Here, formula M2 uses the dynamic range compression function F4 and the enhancement function F5, and is expressed as C = F4(A) * F5(A/B). The value [B] is the value of the unsharp signal US obtained by performing spatial processing on the target contrast signal JS.
The dynamic range compression function F4 is an upward-convex, monotonically increasing power function, expressed as F4(x) = x^γ. The exponent γ of the dynamic range compression function F4 is expressed, using common logarithms, as γ = log(n)/log(m). The enhancement function F5 is also a power function, expressed as F5(x) = x^(1-γ).
The relation between formula M2 and the action of each part of figure signal handling part 602 is explained below.
Spatial manipulation portion 622 performs spatial processing on the target contrast signal JS of value [A] and outputs the unsharp signal US of value [B].
Division portion 625 divides the target contrast signal JS of value [A] by the unsharp signal US of value [B] and outputs the division signal RS of value [A/B]. Intensive treatment portion 626 uses the enhancement function F5 to output, from the division signal RS of value [A/B], the intensive treatment signal TS of value [F5(A/B)]. DR compression unit 628 uses the dynamic range compression function F4 to output, from the target contrast signal JS of value [A], the DR compressed signal DRS of value [F4(A)]. Multiplier 629 multiplies the DR compressed signal DRS of value [F4(A)] by the intensive treatment signal TS of value [F5(A/B)] and outputs the visual processing signal KS of value [F4(A)*F5(A/B)].
The calculations using the dynamic range compression function F4 and the enhancement function F5 may each be performed using a 1-dimensional LUT, or may be performed without using a LUT.
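As a concrete illustration of formula M2, the following sketch computes KS from JS under stated assumptions: a simple box filter stands in for the spatial processing of spatial manipulation portion 622, images are float arrays normalized to a maximum of 1.0, and all names are illustrative rather than taken from the original.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def image_signal_process_m2(JS, m, n, blur_size=15):
    """Sketch of formula M2: KS = F4(A) * F5(A/B)."""
    gamma = np.log(n) / np.log(m)            # exponent of F4 (gamma = log(n)/log(m))
    US = uniform_filter(JS, size=blur_size)  # unsharp signal US (value [B]), assumed box filter
    RS = JS / np.maximum(US, 1e-6)           # division signal RS (value [A/B])
    TS = RS ** (1.0 - gamma)                 # enhancement F5: intensive treatment signal TS
    DRS = JS ** gamma                        # compression F4: DR compressed signal DRS
    return DRS * TS                          # visual processing signal KS
```

For example, with m = 1000 and n = 200, γ ≈ 0.77; F4 maps the range [1/m, 1.0] exactly onto [1/n, 1.0], while F5 restores the local ratios around the unsharp signal.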
[effect of figure signal handling part 602]
The visual dynamic range of the visual processing signal KS is determined by the dynamic range compression function F4.
The transformation performed by formula M2 is described further using Figure 26. Figure 26 is a graph showing the relation between the value of the target contrast signal JS (horizontal axis) and the value obtained by applying the dynamic range compression function F4 to the target contrast signal JS (vertical axis). As shown in Figure 26, the dynamic range of the target contrast signal JS is compressed by the dynamic range compression function F4. In more detail, the target contrast signal JS in the range of values [1/m~1.0] is transformed by the dynamic range compression function F4 into the range of values [1/n~1.0]. As a result, the visual dynamic range of the visual processing signal KS is compressed to 1/n (minimum value : maximum value = 1 : n).
Here, the actual contrast C2 is described. The value [n] of the actual contrast C2 is set as the visible contrast value of the displayed image under the ambient light of the display environment. That is, the value [n] of the actual contrast C2 can be determined as the value [m] of the target contrast C1 reduced by exactly the amount of the influence of the luminance of the ambient light of the display environment.
Because the value [n] of the actual contrast C2 set in this way is used, formula M2 compresses the dynamic range of the target contrast signal JS from 1:m to 1:n. Here, "dynamic range" means the ratio of the minimum value to the maximum value of the signal.
On the other hand, the change of local contrast in the visual processing signal KS is expressed as the ratio, before and after the transformation, between the variation of the value [A] of the target contrast signal JS and the variation of the value [C] of the visual processing signal KS. Within a locally narrow range, the value [B] of the unsharp signal US can be regarded as constant. Then the ratio between the variation of the value C and the variation of the value A in formula M2 becomes 1, and the local contrast does not change between the target contrast signal JS and the visual processing signal KS.
Human vision has the property that, as long as local contrast is maintained, the contrast appears unchanged even when the overall contrast is reduced. Therefore, figure signal handling part 602 realizes visual processing that compresses the dynamic range of the target contrast signal JS without letting the visible contrast decrease.
[actual contrast transformation component 603]
The details of actual contrast transformation component 603 are described using Figure 24.
Actual contrast transformation component 603 transforms the visual processing signal KS into image data in the range that can be input to the display unit (not shown). Image data in the range that can be input to the display unit is, for example, image data whose brightness values are expressed on a gray scale of values [0.0~1.0].
Actual contrast transformation component 603 uses the actual contrast C2 (value [n]) to transform the visual processing signal KS (value [C]) according to formula M21, and outputs the output signal OS (value [Q]). Here, formula M21 is Q = {n/(n-1)} * C - {1/(n-1)}.
The transformation performed by formula M21 is described further using Figure 27. Figure 27 is a graph showing the relation between the value of the visual processing signal KS (horizontal axis) and the value of the output signal OS (vertical axis). As shown in Figure 27, the visual processing signal KS in the range of values [1/n~1.0] is transformed by actual contrast transformation component 603 into the output signal OS in the range of values [0.0~1.0]. Here, relative to each value of the visual processing signal KS, the value of the output signal OS is reduced. This reduction corresponds to the amount by which each brightness of the displayed image is affected by the ambient light.
When a visual processing signal KS of value [1/n] or less is input, actual contrast transformation component 603 transforms it to an output signal OS of value [0]. When a visual processing signal KS of value [1] or more is input, actual contrast transformation component 603 transforms it to an output signal OS of value [1].
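A minimal sketch of the actual contrast transformation follows, assuming the reconstructed form of formula M21 given above; the clipping reproduces the behaviour described for inputs below [1/n] and above [1].

```python
import numpy as np

def actual_contrast_transform(KS, n):
    """Sketch of formula M21: Q = {n/(n-1)}*C - {1/(n-1)}, mapping [1/n, 1.0] onto [0.0, 1.0]."""
    OS = (n / (n - 1.0)) * KS - 1.0 / (n - 1.0)
    return np.clip(OS, 0.0, 1.0)  # inputs of 1/n or less map to 0; inputs of 1 or more map to 1
```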
[effect of visual processing unit 600]
Visual processing unit 600 realizes the same effect as the visual processing unit 21 described in the 1st execution mode. Below, effects characteristic of visual processing unit 600 are described.
(i)
When ambient light exists in the display environment where the output signal OS of visual processing unit 600 is displayed, the output signal OS is viewed under the influence of the ambient light. However, the output signal OS is a signal that has been processed by actual contrast transformation component 603 to correct for the influence of the ambient light. That is, in a display environment where ambient light exists, the output signal OS shown on the display unit is perceived as a displayed image having the characteristics of the visual processing signal KS.
The characteristic of the visual processing signal KS is, like that of the output signal OS of the visual processing unit 21 described in the 1st execution mode (with reference to Figure 16), that the dynamic range of the whole image is compressed while local contrast is maintained. That is, the visual processing signal KS is a signal that maintains the target contrast C1, at which the image is displayed best locally, while being compressed to a dynamic range that is displayable under the influence of the ambient light (corresponding to the actual contrast C2).
Therefore, visual processing unit 600 can correct the decrease in contrast caused by the presence of ambient light while maintaining the visible contrast through processing that utilizes visual characteristics.
[visual processing method]
A visual processing method that realizes the same effect as the above visual processing unit 600 is described using Figure 28. Since the concrete processing of each step is the same as the processing in visual processing unit 600 described above, explanation is omitted.
In the visual processing method shown in Figure 28, first, the target contrast C1 and the actual contrast C2 that have been set are obtained (step S601). Then, using the obtained target contrast C1, the input signal IS is transformed (step S602) and the target contrast signal JS is output. Then, spatial processing is performed on the target contrast signal JS (step S603) and the unsharp signal US is output. Then, the target contrast signal JS is divided by the unsharp signal US (step S604) and the division signal RS is output. The division signal RS is enhanced (step S605) by the enhancement function F5, a power function whose exponent is determined by the target contrast C1 and the actual contrast C2, and the intensive treatment signal TS is output. Meanwhile, the target contrast signal JS is compressed in dynamic range (step S606) by the dynamic range compression function F4, a power function whose exponent is determined by the target contrast C1 and the actual contrast C2, and the DR compressed signal DRS is output. Then, the intensive treatment signal TS output in step S605 and the DR compressed signal DRS output in step S606 are multiplied together (step S607), and the visual processing signal KS is output. Then, using the actual contrast C2, the visual processing signal KS is transformed (step S608) and the output signal OS is output. The processing of steps S602~S608 is repeated for all pixels of the input signal IS (step S609).
Each step of the visual processing method shown in Figure 28 may be realized as a visual processing program in visual processing unit 600 or another computer. The processing of steps S604~S607 may also be performed at once by calculating formula M2.
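The flow of Figure 28 can be condensed as follows. The target contrast transformation of step S602 is assumed here, purely for illustration, to map IS in [0, 1] linearly onto [1/m, 1]; the exact transformation is defined elsewhere in the document, and the filter choice is likewise an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def visual_processing_flow(IS, m, n, blur_size=15):
    gamma = np.log(n) / np.log(m)
    JS = ((m - 1.0) / m) * IS + 1.0 / m           # step S602: assumed linear mapping onto [1/m, 1]
    US = uniform_filter(JS, size=blur_size)       # step S603: spatial processing
    RS = JS / np.maximum(US, 1e-6)                # step S604: division signal RS
    TS = RS ** (1.0 - gamma)                      # step S605: enhancement function F5
    DRS = JS ** gamma                             # step S606: dynamic range compression F4
    KS = TS * DRS                                 # step S607: visual processing signal KS
    OS = (n / (n - 1.0)) * KS - 1.0 / (n - 1.0)   # step S608: actual contrast transformation
    return np.clip(OS, 0.0, 1.0)
```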
[variation]
The present invention is not limited to this above-mentioned execution mode, also various distortion or correction can be arranged in not departing from the scope of the present invention.
(i) Case where formula M2 does not include the enhancement function F5
In the above execution mode, figure signal handling part 602 was described as outputting the visual processing signal KS based on formula M2. However, figure signal handling part 602 may output the visual processing signal KS based only on the dynamic range compression function F4. In this case, the figure signal handling part 602 of this variation need not possess spatial manipulation portion 622, division portion 625, intensive treatment portion 626, or multiplier 629; it only needs to possess DR compression unit 628.
The figure signal handling part 602 of this variation can output a visual processing signal KS that is compressed to a dynamic range displayable under the influence of the ambient light.
(ii) Exponent of the enhancement function F5, and other variations
In the above execution mode, the enhancement function F5 was described as a power function expressed as F5(x) = x^(1-γ). Here, the exponent of the enhancement function F5 may instead be a function of the value [A] of the target contrast signal JS or of the value [B] of the unsharp signal US.
Below represent concrete example (1)~(6).
(1)
The exponent of the enhancement function F5 may be a function of the value [A] of the target contrast signal JS that decreases monotonically when the value [A] of the target contrast signal JS is larger than the value [B] of the unsharp signal US. More specifically, the exponent of the enhancement function F5 is expressed as α1(A)*(1-γ), where the function α1(A), as shown in Figure 29, decreases monotonically with the value [A] of the target contrast signal JS. The maximum value of the function α1(A) is [1.0].
In this case, the enhancement function F5 reduces the enhancement amount of local contrast in high-brightness portions. Therefore, when the brightness of the pixel of interest is higher than that of the surrounding pixels, excessive enhancement of local contrast in the high-brightness portion is suppressed. That is, the brightness value of the pixel of interest is prevented from saturating toward high brightness, the so-called whiteout state.
(2)
The exponent of the enhancement function F5 may be a function of the value [A] of the target contrast signal JS that increases monotonically when the value [A] of the target contrast signal JS is smaller than the value [B] of the unsharp signal US. More specifically, the exponent of the enhancement function F5 is expressed as α2(A)*(1-γ), where the function α2(A), as shown in Figure 30, increases monotonically with the value [A] of the target contrast signal JS. The maximum value of the function α2(A) is [1.0].
In this case, the enhancement function F5 reduces the enhancement amount of local contrast in low-brightness portions. Therefore, when the brightness of the pixel of interest is lower than that of the neighboring pixels, excessive enhancement of local contrast in the low-brightness portion is suppressed. That is, the brightness value of the pixel of interest is prevented from saturating toward low brightness, the so-called crushed-black state.
(3)
The exponent of the enhancement function F5 may be a function of the value [A] of the target contrast signal JS that increases monotonically when the value [A] of the target contrast signal JS is larger than the value [B] of the unsharp signal US. More specifically, the exponent of the enhancement function F5 is expressed as α3(A)*(1-γ), where the function α3(A), as shown in Figure 31, increases monotonically with the value [A] of the target contrast signal JS. The maximum value of the function α3(A) is [1.0].
In this case, the enhancement function F5 reduces the enhancement amount of local contrast in low-brightness portions. Therefore, when the brightness of the pixel of interest is higher than that of the surrounding pixels, excessive enhancement of local contrast in the low-brightness portion is suppressed. In the low-brightness portions of an image the signal level is small, so the relative proportion of noise is high; this processing suppresses degradation of the SN ratio.
(4)
The exponent of the enhancement function F5 may be a function of both the value [A] of the target contrast signal JS and the value [B] of the unsharp signal US that decreases monotonically with the absolute value of the difference between the value [A] and the value [B]. In other words, it can also be regarded as a function that increases as the ratio of the value [A] to the value [B] approaches 1. More specifically, the exponent of the enhancement function F5 is expressed as α4(A,B)*(1-γ), where the function α4(A,B), as shown in Figure 32, decreases monotonically with the absolute value of [A-B].
In this case, local contrast is especially enhanced for pixels of interest whose light-dark difference from the surrounding pixels is small, and enhancement of local contrast is suppressed for pixels of interest whose light-dark difference from the surrounding pixels is large.
(5)
An upper limit or lower limit may also be set on the computation result of the enhancement function F5 of the above (1)~(4). Specifically, when the value [F5(A/B)] exceeds a prescribed upper limit, the prescribed upper limit is used as the computation result of the enhancement function F5. Likewise, when the value [F5(A/B)] falls below a prescribed lower limit, the prescribed lower limit is used as the computation result of the enhancement function F5.
In this case, the enhancement amount of local contrast produced by the enhancement function F5 can be limited to a suitable range, suppressing excessive or insufficient contrast enhancement.
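The exponent variations (1)~(5) can be sketched as below. The text characterizes the shapes of α1~α4 (Figures 29~32) only qualitatively, so the linear forms used here are illustrative assumptions; only α1 and α4 are written out, and the clamp implements variation (5).

```python
import numpy as np

def alpha1(A):
    # (1): decreases monotonically with A; maximum value 1.0 (assumed linear shape)
    return np.clip(1.0 - 0.5 * A, 0.0, 1.0)

def alpha4(A, B):
    # (4): decreases monotonically with |A - B|; maximum value 1.0 (assumed linear shape)
    return np.clip(1.0 - np.abs(A - B), 0.0, 1.0)

def enhance_f5(A, B, gamma, lower=0.5, upper=2.0):
    exponent = alpha4(A, B) * (1.0 - gamma)        # variable exponent, here using alpha4
    value = (A / np.maximum(B, 1e-6)) ** exponent
    return np.clip(value, lower, upper)            # (5): clamp the result of F5 to [lower, upper]
```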
(6)
The above (1)~(5) are similarly applicable to the case where the computation using the enhancement function F5 is performed in the 1st execution mode (for example, in [description document data] (2) or (3) of the 1st execution mode). In the 1st execution mode, the value [A] is the value of the input signal IS, and the value [B] is the value of the unsharp signal US obtained by performing spatial processing on the input signal IS.
(iii) Case where formula M2 does not include dynamic range compression
In the above execution mode, figure signal handling part 602 was described as having the same configuration as the visual processing unit 21 shown in the 1st execution mode. However, the figure signal handling part 602 of a variation may have the same configuration as the visual processing unit 31 (with reference to Figure 19) shown in the 1st execution mode. Specifically, the figure signal handling part 602 of this variation is realized by regarding the input signal IS of the visual processing unit 31 as the target contrast signal JS and regarding the output signal OS as the visual processing signal KS.
In this case, the figure signal handling part 602 of this variation outputs the visual processing signal KS (value [C]) from the target contrast signal JS (value [A]) and the unsharp signal US (value [B]) based on formula M3. Here, formula M3 uses the enhancement function F5 and is expressed as C = A * F5(A/B).
In the processing using formula M3, dynamic range compression is not performed on the input signal, but local contrast can still be enhanced. Through the effect of this local contrast enhancement, various impressions of "visual" dynamic range compression or expansion can be given.
The above [variations] (ii) (1)~(5) may also be applied to this variation in the same way. That is, in this variation the enhancement function F5 is a power function, and its exponent may be a function having the same slope characteristics as the functions α1(A), α2(A), α3(A), α4(A,B) described in the above [variations] (ii) (1)~(4). Also, as described in the above [variations] (ii) (5), an upper limit or lower limit may be set on the computation result of the enhancement function F5.
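A sketch of the formula M3 variation, again assuming a box filter for the spatial processing and illustrative names:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def image_signal_process_m3(JS, gamma, blur_size=15):
    """Sketch of formula M3: C = A * F5(A/B), local contrast enhancement without DR compression."""
    US = uniform_filter(JS, size=blur_size)
    return JS * (JS / np.maximum(US, 1e-6)) ** (1.0 - gamma)
```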
(iv) Automatic setting of the parameters
In the above execution mode, target contrast configuration part 604 and actual contrast configuration part 605 were described as letting the user set the values of the target contrast C1 and the actual contrast C2 via an input interface or the like. However, target contrast configuration part 604 and actual contrast configuration part 605 may set the values of the target contrast C1 and the actual contrast C2 automatically.
(1) display
A case is described where the display unit showing the output signal OS is a display such as a PDP, LCD, or CRT whose white luminance (white level) and black luminance (black level) displayable in the absence of ambient light are known, and actual contrast configuration part 605 automatically sets the value of the actual contrast C2.
Figure 33 shows the actual contrast configuration part 605 that automatically sets the value of the actual contrast C2. Actual contrast configuration part 605 possesses brightness measurement portion 605a, storage part 605b, and calculating part 605c.
Brightness measurement portion 605a is a luminance sensor that measures the brightness value of the ambient light in the display environment of the display showing the output signal OS. Storage part 605b stores the white luminance (white level) and black luminance (black level) that the display showing the output signal OS can display in the absence of ambient light. Calculating part 605c obtains these values from brightness measurement portion 605a and storage part 605b and calculates the value of the actual contrast C2.
An example of the calculation by calculating part 605c is described. Calculating part 605c adds the brightness value of the ambient light obtained from brightness measurement portion 605a to the brightness value of the black level and to the brightness value of the white level stored in storage part 605b, respectively. Calculating part 605c then divides the sum with the white level by the sum with the black level, and uses the result as the value [n] of the actual contrast C2. In this way, the value [n] of the actual contrast C2 expresses the contrast value that the display can show in a display environment where ambient light exists.
Further, as shown in Figure 33, storage part 605b may store the ratio of the white luminance (white level) to the black luminance (black level) that the display can show in the absence of ambient light as the value [m] of the target contrast C1. In this case, actual contrast configuration part 605 also realizes the function of target contrast configuration part 604, which automatically sets the target contrast C1. Alternatively, storage part 605b need not store the ratio; the ratio may instead be calculated by calculating part 605c.
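The calculation of calculating part 605c can be sketched as follows; the numeric values in the comment are illustrative, not from the original.

```python
def set_contrasts(white_level, black_level, ambient_luminance):
    # actual contrast C2: ratio of (white + ambient) to (black + ambient)
    n = (white_level + ambient_luminance) / (black_level + ambient_luminance)
    # target contrast C1: displayable contrast with no ambient light
    m = white_level / black_level
    return m, n

# e.g. white 500, black 0.5, ambient contribution 2 (all in cd/m^2):
# m = 1000, n = 502 / 2.5 = 200.8
```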
(2) projecting apparatus
A case is described where the display unit showing the output signal OS is a projector or the like, the white luminance (white level) and black luminance (black level) displayable in the absence of ambient light depend on the distance to the screen, and actual contrast configuration part 605 automatically sets the value of the actual contrast C2.
Figure 34 shows the actual contrast configuration part 605 that automatically sets the value of the actual contrast C2. Actual contrast configuration part 605 possesses brightness measurement portion 605d and control part 605e.
Brightness measurement portion 605d is a luminance sensor that measures the brightness values displayed by the projector in the display environment according to the output signal OS. Control part 605e makes the projector display the white level and the black level. Further, it obtains the brightness value of each level from brightness measurement portion 605d and calculates the value of the actual contrast C2.
An example of the action of control part 605e is described using Figure 35. First, control part 605e operates the projector in a display environment where ambient light exists, making it display the white level (step S620). Control part 605e obtains the measured brightness of the white level from brightness measurement portion 605d (step S621). Then, control part 605e operates the projector in the display environment where ambient light exists, making it display the black level (step S622). Control part 605e obtains the measured brightness of the black level from brightness measurement portion 605d (step S623). Control part 605e then calculates the ratio between the obtained brightness value of the white level and the brightness value of the black level, and outputs it as the value of the actual contrast C2. In this way, the value [n] of the actual contrast C2 expresses the contrast value that the projector can show in a display environment where ambient light exists.
Similarly to the above, the value [m] of the target contrast C1 can also be derived by calculating the ratio between the white level and the black level in a display environment without ambient light. In this case, actual contrast configuration part 605 also realizes the function of target contrast configuration part 604, which automatically sets the target contrast C1.
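The control flow of Figure 35 might look as follows; the projector and sensor interfaces are hypothetical placeholders.

```python
def measure_actual_contrast(projector, luminance_sensor):
    projector.show_white_level()          # step S620: display the white level
    white = luminance_sensor.measure()    # step S621: measured white-level brightness
    projector.show_black_level()          # step S622: display the black level
    black = luminance_sensor.measure()    # step S623: measured black-level brightness
    return white / black                  # value [n] of the actual contrast C2
```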
(v) Other signal spaces
In the above execution mode, the processing in visual processing unit 600 was described as being performed on the luminance of the input signal IS. However, the present invention is not effective only when the input signal IS is expressed in the YCbCr color space. The input signal IS may also be expressed in the YUV color space, the Lab color space, the Luv color space, the YIQ color space, the XYZ color space, the YPbPr color space, and so on. In these cases, the processing described in the above execution mode can be performed on the luminance or lightness of each color space.
When the input signal IS is expressed in the RGB color space, the processing in visual processing unit 600 may be performed independently for each RGB component. That is, target contrast transformation component 601 processes the RGB components of the input signal IS independently and outputs the RGB components of the target contrast signal JS. Further, figure signal handling part 602 processes the RGB components of the target contrast signal JS independently and outputs the RGB components of the visual processing signal KS. Further, actual contrast transformation component 603 processes the RGB components of the visual processing signal KS independently and outputs the RGB components of the output signal OS. Here, common values of the target contrast C1 and the actual contrast C2 are used in the processing of each RGB component.
(vi) chromatic aberration correction is handled
Visual processing unit 600 may further possess a chromatic aberration correction handling part in order to suppress the hue of the output signal OS differing from the hue of the input signal IS owing to the influence of the luminance component processed by figure signal handling part 602.
Figure 36 shows a visual processing unit 600 possessing a chromatic aberration correction handling part 608. Components identical to those of visual processing unit 600 shown in Figure 24 are given the same symbols. Here, the input signal IS has the YCbCr color space, and the same processing as described in the above execution mode is performed on the Y component. The chromatic aberration correction handling part 608 is described below.
Chromatic aberration correction handling part 608 takes the target contrast signal JS as the 1st input (value [Yin]), the visual processing signal KS as the 2nd input (value [Yout]), the Cb component of the input signal IS as the 3rd input (value [CBin]), and the Cr component of the input signal IS as the 4th input (value [CRin]), and outputs the Cb component after chromatic aberration correction processing as the 1st output (value [CBout]) and the Cr component after chromatic aberration correction as the 2nd output (value [CRout]).
Figure 37 shows an outline of the chromatic aberration correction processing. Chromatic aberration correction handling part 608 has 4 inputs, [Yin], [Yout], [CBin], and [CRin], and obtains the 2 outputs [CBout] and [CRout] by computation on these 4 inputs.
[CBout] and [CRout] are derived based on the following formulas, which correct [CBin] and [CRin] according to the difference and ratio between [Yin] and [Yout].
[CBout] is derived based on a1*([Yout]-[Yin])*[CBin] + a2*(1-[Yout]/[Yin])*[CBin] + a3*([Yout]-[Yin])*[CRin] + a4*(1-[Yout]/[Yin])*[CRin] + [CBin] (hereinafter called formula CB).
[CRout] is derived based on a5*([Yout]-[Yin])*[CBin] + a6*(1-[Yout]/[Yin])*[CBin] + a7*([Yout]-[Yin])*[CRin] + a8*(1-[Yout]/[Yin])*[CRin] + [CRin] (hereinafter called formula CR).
The coefficients a1~a8 in formulas CB and CR are values determined in advance, by the estimation computation described below, in a computing device or the like external to visual processing unit 600.
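Formulas CB and CR can be written directly as code; the sketch below assumes the coefficients a1~a8 have already been determined by the estimation computation described next.

```python
def correct_chroma(Yin, Yout, CBin, CRin, a):
    """Sketch of formulas CB and CR; a is a sequence of the 8 coefficients a1..a8."""
    d = Yout - Yin               # difference between [Yout] and [Yin]
    r = 1.0 - Yout / Yin         # ratio term (assumes Yin != 0)
    CBout = a[0]*d*CBin + a[1]*r*CBin + a[2]*d*CRin + a[3]*r*CRin + CBin
    CRout = a[4]*d*CBin + a[5]*r*CBin + a[6]*d*CRin + a[7]*r*CRin + CRin
    return CBout, CRout
```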
The estimation computation of the coefficients a1~a8 in the computing device or the like is described using Figure 38.
First, the 4 inputs [Yin], [Yout], [CBin], and [CRin] are obtained (step S630). The values of each input are data prepared in advance for determining the coefficients a1~a8. For example, as [Yin], [CBin], and [CRin], values obtained by thinning out all possible values at a certain interval are used. Further, as [Yout], values obtained by thinning out, at a certain interval, the values that would be output when [Yin] is input to figure signal handling part 602 are used. The 4 inputs are obtained from data prepared in this way.
The obtained [Yin], [CBin], and [CRin] are converted into the Lab color space, and the chromaticity values [Ain] and [Bin] in the converted Lab color space are calculated (step S631).
Then, using default coefficients a1~a8, [formula CB] and [formula CR] are calculated and the values of [CBout] and [CRout] are obtained (step S632). The obtained values and [Yout] are converted into the Lab color space, and the chromaticity values [Aout] and [Bout] in the converted Lab color space are calculated (step S633).
Then, an evaluation function is calculated using the calculated chromaticity values [Ain], [Bin], [Aout], and [Bout] (step S634), and it is judged whether the value of the evaluation function is equal to or below a prescribed threshold. Here, the evaluation function is a function that takes a smaller value when the hue change between [Ain], [Bin] and [Aout], [Bout] is smaller, for example the sum of the squares of the deviations of each component. More specifically, the evaluation function is ([Ain]-[Aout])^2 + ([Bin]-[Bout])^2, or the like.
When the value of the evaluation function is larger than the prescribed threshold (step S635), the coefficients a1~a8 are corrected (step S636), and the computation of steps S632~S635 is repeated using the new coefficients.
When the value of the evaluation function is smaller than the prescribed threshold (step S635), the coefficients a1~a8 used in the calculation of the evaluation function are output as the result of the estimation computation (step S637).
In the estimation computation, one set of the prepared combinations of the 4 inputs [Yin], [Yout], [CBin], and [CRin] may be used to estimate the coefficients a1~a8, or the above processing may be carried out using many sets of the combinations, and the coefficients a1~a8 that minimize the evaluation function output as the result of the estimation computation.
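A rough sketch of the estimation computation of Figure 38 follows. The text does not specify how the coefficients are corrected in step S636, so a random perturbation is assumed here; ycbcr_to_lab() is a hypothetical helper for the Lab conversion, and correct_chroma() is the formula CB/CR sketch given above.

```python
import numpy as np

def estimate_coefficients(samples, a_init, threshold, ycbcr_to_lab, max_iter=1000):
    a = np.array(a_init, dtype=float)
    for _ in range(max_iter):
        err = 0.0
        for Yin, Yout, CBin, CRin in samples:                         # step S630
            _, Ain, Bin = ycbcr_to_lab(Yin, CBin, CRin)               # step S631
            CBout, CRout = correct_chroma(Yin, Yout, CBin, CRin, a)   # step S632
            _, Aout, Bout = ycbcr_to_lab(Yout, CBout, CRout)          # step S633
            err += (Ain - Aout) ** 2 + (Bin - Bout) ** 2              # step S634: evaluation function
        if err <= threshold:                                          # step S635
            return a                                                  # step S637
        a = a + np.random.normal(scale=0.01, size=8)                  # step S636 (assumed update rule)
    return a
```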
(variation in the chromatic aberration correction processing)
(1)
In the above chromatic aberration correction handling part 608, the value of the target contrast signal JS is [Yin], the value of the visual processing signal KS is [Yout], the value of the Cb component of the input signal IS is [CBin], the value of the Cr component of the input signal IS is [CRin], the value of the Cb component of the output signal OS is [CBout], and the value of the Cr component of the output signal OS is [CRout]. However, [Yin], [Yout], [CBin], [CRin], [CBout], and [CRout] may also be the values of other signals.
For example, when the input signal IS is a signal of the RGB color space, target contrast transformation component 601 (with reference to Figure 24) processes each component of the input signal IS. In this case, the processed RGB-color-space signal may be converted into a signal of the YCbCr color space, and the value of its Y component used as [Yin], the value of its Cb component as [CBin], and the value of its Cr component as [CRin].
Further, when the output signal OS is a signal of the RGB color space, the derived [Yout], [CBout], and [CRout] may be converted into the RGB color space, each component transformed by actual contrast transformation component 603, and the result used as the output signal OS.
(2)
Chromatic aberration correction handling part 608 may use the ratio of the signal values before and after the processing in figure signal handling part 602 to correct each RGB component input to chromatic aberration correction handling part 608.
The structure of the visual processing unit 600 of this variation is described using Figure 39. Parts that realize substantially the same functions as in visual processing unit 600 shown in Figure 36 are given the same symbols and their explanation is omitted. The visual processing unit 600 of this variation is characterized in that it possesses a luminance signal generating unit 610.
Each component of the input signal IS, a signal of the RGB color space, is transformed in target contrast transformation component 601 into the target contrast signal JS, a signal of the RGB color space. The detailed processing is the same as described above and its explanation is omitted. Here, the values of the components of the target contrast signal JS are [Rin], [Gin], and [Bin].
Luminance signal generating unit 610 generates a luminance signal of value [Yin] from the components of the target contrast signal JS. The luminance signal is obtained by adding the values of the RGB components in fixed proportions; for example, the value [Yin] is obtained by the formula [Yin] = 0.299*[Rin] + 0.587*[Gin] + 0.114*[Bin], or the like.
Figure signal handling part 602 processes the luminance signal of value [Yin] and outputs the visual processing signal KS of value [Yout]. The detailed processing is the same as the processing in figure signal handling part 602 (with reference to Figure 36), which outputs the visual processing signal KS from the target contrast signal JS, so explanation is omitted.
Chromatic aberration correction handling part 608 uses the luminance signal (value [Yin]), the visual processing signal KS (value [Yout]), and the target contrast signal JS (values [Rin], [Gin], [Bin]) to output a chromatic aberration correction signal (values [Rout], [Gout], [Bout]) as a signal of the RGB color space.
Specifically, chromatic aberration correction handling part 608 calculates the ratio of the value [Yout] to the value [Yin] (value [[Yout]/[Yin]]). The calculated ratio is used as the chromatic aberration correction coefficient and multiplied with each component of the target contrast signal JS (values [Rin], [Gin], [Bin]). In this way, the chromatic aberration correction signal (values [Rout], [Gout], [Bout]) is output.
Actual contrast transformation component 603 transforms each component of the chromatic aberration correction signal, a signal of the RGB color space, into the output signal OS as a signal of the RGB color space. The detailed processing is the same as described above and its explanation is omitted.
In the visual processing unit 600 of this variation, the processing in figure signal handling part 602 is performed only on the luminance signal and does not need to be performed on each RGB component. Therefore, the load of the visual processing on an input signal IS of the RGB color space is reduced.
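The RGB-space variation of Figure 39 reduces, in outline, to the following sketch; names are illustrative.

```python
import numpy as np

def chroma_correct_rgb(Rin, Gin, Bin, Yout):
    Yin = 0.299 * Rin + 0.587 * Gin + 0.114 * Bin   # luminance signal generating unit 610
    k = Yout / np.maximum(Yin, 1e-6)                # chromatic aberration correction coefficient
    return k * Rin, k * Gin, k * Bin                # values [Rout], [Gout], [Bout]
```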
(3)
"Formula CB" and "formula CR" are only examples; other formulas may also be used.
(vii) visual handling part 623
Visual handling part 623 shown in Figure 24 may also be formed by a 2-dimensional LUT.
In this case, the 2-dimensional LUT stores the values of the visual processing signal KS corresponding to the values of the target contrast signal JS and the values of the unsharp signal US. More specifically, the values of the visual processing signal KS are determined based on [formula M2], described in [the 1st execution mode] (description document data) (2) (the 2nd description document data). In [formula M2], the value of the target contrast signal JS is used as the value A, and the value of the unsharp signal US as the value B.
Visual processing unit 600 possesses such 2-dimensional LUTs in a storage device (not shown). The storage device may be built into visual processing unit 600 or may be connected externally by wire or wirelessly. Each 2-dimensional LUT stored in the storage device is associated with a value of the target contrast C1 and a value of the actual contrast C2. That is, for each combination of a value of the target contrast C1 and a value of the actual contrast C2, the same computation as described in [the 2nd execution mode] (figure signal handling part 602) (action of figure signal handling part 602) is carried out, and the result is stored as a 2-dimensional LUT.
Once visual handling part 623 obtains the values of the target contrast C1 and the actual contrast C2, it reads, from the 2-dimensional LUTs stored in the storage device, the 2-dimensional LUT associated with the obtained values. Further, visual handling part 623 performs visual processing using the 2-dimensional LUT that has been read. Specifically, visual handling part 623 obtains the value of the target contrast signal JS and the value of the unsharp signal US, reads from the 2-dimensional LUT the value of the visual processing signal KS corresponding to the obtained values, and outputs the visual processing signal KS.
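A sketch of a 2-dimensional-LUT form of visual handling part 623 follows: the KS value for each pair of JS and US values is precomputed from formula M2 and looked up at run time. The table resolution and the nearest-neighbour lookup are illustrative choices.

```python
import numpy as np

def build_lut(m, n, size=64):
    gamma = np.log(n) / np.log(m)
    A = np.linspace(1.0 / m, 1.0, size)[:, None]      # target contrast signal JS values
    B = np.linspace(1.0 / m, 1.0, size)[None, :]      # unsharp signal US values
    return (A ** gamma) * ((A / B) ** (1.0 - gamma))  # KS = F4(A) * F5(A/B) for every (A, B)

def lookup(lut, JS, US, m):
    size = lut.shape[0]
    scale = (size - 1) / (1.0 - 1.0 / m)
    i = np.clip(np.round((JS - 1.0 / m) * scale), 0, size - 1).astype(int)
    j = np.clip(np.round((US - 1.0 / m) * scale), 0, size - 1).astype(int)
    return lut[i, j]                                  # visual processing signal KS
```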
(the 3rd execution mode)
As the 3rd execution mode of the present invention, application examples of the visual processing units, visual processing methods, and visual processing programs described in the above 1st and 2nd execution modes, and systems using them, are described.
A visual processing unit is a device that performs visual processing of images, built into or connected to a machine that handles images, such as a computer, television set, digital camera, portable phone, PDA, printer, or scanner, and is realized as an integrated circuit such as an LSI.
In more detail, each functional block of the above execution modes may be made into an individual chip, or some or all of them may be integrated into one chip. Although the term LSI is used here, depending on the degree of integration it may also be called IC, system LSI, super LSI, or ultra LSI.
The method of circuit integration is not limited to LSI; it may also be realized by a dedicated circuit or a general-purpose processor. After LSI manufacture, an FPGA (Field Programmable Gate Array) that can be programmed, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
Further, if a circuit-integration technology replacing LSI appears through progress in semiconductor technology or another derivative technology, the functional blocks may of course be integrated using that technology. Application of biotechnology or the like is also conceivable.
The processing of each block of each visual processing unit described in the above 1st and 2nd execution modes is carried out by, for example, a central processing unit (CPU) possessed by the visual processing unit. Programs for carrying out each processing are stored in a storage device such as a hard disk or ROM, and are read out to a ROM or RAM and executed.
In the visual processing unit 1 of Fig. 1, the 2-dimensional LUT 4 is stored in a storage device such as a hard disk or ROM and referred to as needed. Further, visual handling part 3 receives description document data provided from a description document data entry device 8 connected to visual processing unit 1 directly, or indirectly via a network, and registers it as the 2-dimensional LUT 4.
A visual processing unit may also be a device that is built into or connected to a machine that handles moving images and performs gray scale processing of the image of every frame (every field).
A visual processing program is a program, stored in a storage device such as a hard disk or ROM, that executes visual processing of images in a machine that handles images, such as a computer, television set, digital camera, portable phone, PDA, printer, or scanner, or in a device connected to such a machine; it is provided via, for example, a recording medium such as a CD-ROM, or via a network.
(2)
The visual processing units described in the above 1st and 2nd execution modes may also be expressed by the configurations shown in Figures 40~41.
(1)
(formation)
Figure 40 is a block diagram showing the configuration of a visual processing unit 910 that realizes a function equivalent to, for example, the visual processing unit 625 shown in Figure 7.
In visual processing unit 910, transducer 911 and user input part 912 have the same function as input unit 527 (with reference to Fig. 7). More specifically, transducer 911 is a sensor that detects the ambient light in the environment where visual processing unit 910 is installed, or in the environment where the output signal OS of visual processing unit 910 is displayed, and outputs the detected value as the parameter P1 representing the ambient light. User input part 912 is a device that lets the user set the ambient light intensity stepwise or continuously, for example as "strong, medium, weak", and outputs the set value as the parameter P1 representing the ambient light.
Efferent 914 has the same function as description document data entry portion 526 (with reference to Fig. 7). More specifically, efferent 914 possesses a plurality of sets of description document data associated with values of the parameter P1 representing the ambient light. The description document data here are data in table form that give the value of the output signal OS corresponding to the input signal IS and to the signal obtained by performing spatial processing on the input signal IS. Efferent 914 outputs to transformation component 915, as the brightness adjustment parameter P2, the description document data corresponding to the obtained value of the parameter P1 representing the ambient light.
Transformation component 915 has the same function as spatial manipulation portion 2 and visual handling part 3 (with reference to Fig. 7). Transformation component 915 takes as input the brightness of the object pixel (pixel of interest) subject to visual processing, the brightness of the neighboring pixels located around the object pixel, and the brightness adjustment parameter P2, transforms the brightness of the object pixel, and outputs the output signal OS.
More specifically, transformation component 915 performs spatial processing on the object pixel and the neighboring pixels. Further, transformation component 915 reads, from the brightness adjustment parameter P2 in table form, the value of the output signal OS corresponding to the object pixel and the result of the spatial processing, and outputs it as the output signal OS.
(variation)
(1)
In the above configuration, the brightness adjustment parameter P2 is not limited to the above description document data. For example, the brightness adjustment parameter P2 may be coefficient matrix data used when computing the value of the output signal OS from the brightness of the object pixel and the brightness of the neighboring pixels. Here, coefficient matrix data are data holding the coefficient part of the function used when computing the value of the output signal OS from the brightness of the object pixel and the brightness of the neighboring pixels.
(2)
Efferent 914 need not possess description document data or coefficient matrix data corresponding to all values of the parameter P1 representing the ambient light. In this case, suitable description document data can be generated by appropriate interpolation or extrapolation of the description document data that are possessed, according to the obtained value of the parameter P1 representing the ambient light.
(2)
(formation)
Figure 41 is a block diagram showing the configuration of a visual processing unit 920 that realizes the same function as, for example, the visual processing unit 600 shown in Figure 24.
In visual processing unit 920, efferent 921 further obtains an external parameter P3 in addition to the parameter P1 representing the ambient light, and outputs the brightness adjustment parameter P2 based on the parameter P1 representing the ambient light and the external parameter P3.
Here, the parameter P1 representing the ambient light is the same as described in (1) above.
The external parameter P3 is a parameter expressing, for example, the visual effect requested by the user viewing the output signal OS. More specifically, it is, for example, the contrast value (target contrast) requested by the user viewing the image. Here, the external parameter P3 is set by target contrast configuration part 604 (with reference to Figure 24), or is set using a default value stored in advance in efferent 921.
Efferent 921 calculates the value of the actual contrast from the parameter P1 representing the ambient light, using the configuration shown in Figure 33 or Figure 34, and outputs it as a brightness adjustment parameter P2. Efferent 921 also outputs the external parameter P3 (target contrast) as a brightness adjustment parameter P2. Further, efferent 921 stores a plurality of sets of description document data held in the 2-dimensional LUTs described in [the 2nd execution mode] (variations) (vii), selects description document data according to the external parameter P3 and the actual contrast calculated from the parameter P1 representing the ambient light, and outputs the data of this table as a brightness adjustment parameter P2.
Transformation component 922 has the same function as target contrast transformation component 601, figure signal handling part 602, and actual contrast transformation component 603 (with reference to Figure 24). More specifically, the input signal IS (the brightness of the object pixel and the brightness of the neighboring pixels) and the brightness adjustment parameter P2 are input to transformation component 922, and the output signal OS is output. For example, the input signal IS is transformed into the target contrast signal JS (with reference to Figure 24) using the target contrast obtained as a brightness adjustment parameter P2. Further, spatial processing is performed on the target contrast signal JS and the unsharp signal US (with reference to Figure 24) is derived.
Transformation component 922 possesses the visual handling part 623 of the variation described in [the 2nd execution mode] (variations) (vii), and outputs the visual processing signal KS (with reference to Figure 24) from the description document data obtained as a brightness adjustment parameter P2, the target contrast signal JS, and the unsharp signal US. Further, the visual processing signal KS is transformed into the output signal OS using the actual contrast obtained as a brightness adjustment parameter P2.
In this visual processing unit 920, the description document data used in the visual processing can be selected based on the external parameter P3 and the parameter P1 representing the ambient light; the influence of the ambient light is corrected, and local contrast can be improved even in an environment where ambient light exists. Contrast close to the preference of the user viewing the output signal OS can thus be achieved.
(variation)
Even in this configuration, the same modifications as described in (1) can be made.
The configuration described in (1) and the configuration described in (2) may be switched and used as necessary. The switching may be carried out by a switching signal from outside. Alternatively, which configuration to use may be determined according to whether the external parameter P3 exists.
Although the actual contrast was described as being calculated by efferent 921, the value of the actual contrast may also be input directly to efferent 921.
(3) In the configuration shown in Figure 41, a mechanism may further be provided to prevent the input from efferent 921 to transformation component 922 from changing abruptly.
Visual processing unit 920' shown in Figure 42 differs from visual processing unit 920 shown in Figure 41 in that it possesses an adjustment part 925 that slows the temporal change of the parameter P1 representing the ambient light. Adjustment part 925 takes the parameter P1 representing the ambient light as input and outputs the adjusted output P4.
In this way, efferent 921 obtains a parameter P1 representing the ambient light that does not follow abrupt changes, and as a result the temporal change of the output of efferent 921 also becomes gradual.
Adjustment part 925 is realized by, for example, an IIR filter. In the IIR filter, the value [P4] of the output P4 of adjustment part 925 is computed by [P4] = k1*[P4]' + k2*[P1]. In this formula, k1 and k2 are parameters each taking a positive value, [P1] is the value of the parameter P1 representing the ambient light, and [P4]' is the value of the delayed output of the output P4 of adjustment part 925 (for example, the previous output). The processing in adjustment part 925 may also be carried out using a configuration other than an IIR filter.
Alternatively, as in visual processing unit 920'' shown in Figure 43, adjustment part 925 may be provided on the output side of efferent 921, as a mechanism that directly slows the temporal change of the brightness adjustment parameter P2.
Here, the action of adjustment part 925 is the same as above. Specifically, the value [P4] of the output P4 of adjustment part 925 is computed by [P4] = k3*[P4]' + k4*[P2]. In this formula, k3 and k4 are parameters each taking a positive value, [P2] is the value of the brightness adjustment parameter P2, and [P4]' is the value of the delayed output of the output P4 of adjustment part 925 (for example, the previous output). The processing in adjustment part 925 may also be carried out using a configuration other than an IIR filter.
With the configurations shown in Figures 42 and 43, the temporal change of the parameter P1 representing the ambient light or of the brightness adjustment parameter P2 can be controlled. Therefore, even when, for example, the ambient light transducer 911 responds to a person moving in front of the sensor and the parameter changes greatly in a short time, abrupt parameter changes can be suppressed. As a result, flicker of the display screen is suppressed.
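Adjustment part 925 can be sketched as a first-order IIR filter as follows; the default coefficient values are illustrative (the text only requires them to be positive).

```python
class AdjustmentPart:
    """First-order IIR filter: [P4] = k1*[P4]' + k2*[P1] (or k3, k4 for parameter P2)."""
    def __init__(self, k_prev=0.9, k_in=0.1, initial=0.0):
        self.k_prev, self.k_in = k_prev, k_in   # both positive; values here are illustrative
        self.p4 = initial                       # delayed output [P4]'
    def update(self, p_in):
        self.p4 = self.k_prev * self.p4 + self.k_in * p_in
        return self.p4
```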
(the 4th execution mode)
In the 4th~the 6th execution modes, visual processing units are described that can solve the following problems of the conventional gray scale processing described using Figures 104~107.
(Problems of conventional gray scale processing)
In histogram preparing department 302 (with reference to Figure 104), the gray-scale transformation curve Cm is made from the brightness histogram Hm of the pixels in the image region Sm. To make a gray-scale transformation curve Cm more suited to the image region Sm, it is necessary to refer to a larger number of pixels, covering the image from its dark portions (shadows) to its bright portions (highlights). For this reason each image region Sm cannot be made sufficiently small; that is, the division number n of the original image cannot be made sufficiently large. The division number differs depending on the picture content, but empirically a division number of 4 to 16 is used.
Because each image region Sm cannot be made sufficiently small in this way, the following problems arise in the output signal OS after gray scale processing. That is, because gray scale processing is performed for each image region Sm using one gray-scale transformation curve Cm, in some cases the joints at the boundaries of the image regions Sm stand out unnaturally, or pseudo-contours are produced within an image region Sm. Moreover, since the division number is mostly 4 to 16 and the image regions Sm are correspondingly large, extremely different images may exist between image regions; the change of shading between the image regions then becomes large, and it is difficult to prevent the generation of pseudo-contours. For example, as shown in Figures 105(b) and 105(c), extreme changes of shading arise from the positional relation between the image (for example, an object in the image) and the image regions Sm.
Below, in the 4th~the 6th execution modes, visual processing units that can solve the above problems of conventional gray scale processing are described using Figures 44~64.
(Features of visual processing unit 101 as the 4th execution mode)
Visual processing unit 101 as the 4th execution mode of the present invention is described using Figures 44~48. Visual processing unit 101 is a device that is built into or connected to a machine that handles images, such as a computer, television set, digital camera, portable phone, or PDA, and performs gray scale processing of images. Compared with conventional devices, visual processing unit 101 is characterized in that it performs gray scale processing for each of finely divided image regions.
(formation)
Figure 44 is a block diagram explaining the structure of visual processing unit 101. Visual processing unit 101 possesses: image segmentation portion 102, which divides the original image input as the input signal IS into a plurality of image regions Pm (1≤m≤n, where n is the division number of the original image); gray-scale transformation curve leading-out portion 110, which derives a gray-scale transformation curve Cm for each image region Pm; and gray scale handling part 105, which loads the gray-scale transformation curves Cm and outputs an output signal OS in which gray scale processing has been carried out for each image region Pm. Gray-scale transformation curve leading-out portion 110 comprises: histogram preparing department 103, which makes a brightness histogram Hm of the pixels of the wide-area image region Em consisting of each image region Pm and the image regions around it; and grey scale curve preparing department 104, which makes the gray-scale transformation curve Cm for each image region Pm from the brightness histogram Hm thus made.
(action)
The action of each part is explained using Figures 45~47. Image segmentation portion 102 divides the original image input as the input signal IS into a plurality (n) of image regions Pm (with reference to Figure 45). Here, the division number of the original image is larger than the division number (for example, 4 to 16) of the conventional visual processing unit 300 shown in Figure 104; for example, the original image is divided into 4800 regions, 80 in the horizontal direction and 60 in the vertical direction.
Histogram preparing department 103 makes the brightness histogram Hm of the wide-area image region Em for each image region Pm. Here, the wide-area image region Em is a set of a plurality of image regions including the image region Pm; for example, it is the set of 25 image regions of 5 blocks vertically and 5 blocks horizontally centered on the image region Pm. Depending on the position of the image region Pm, a wide-area image region Em of 5 vertical and 5 horizontal blocks around the image region Pm may not be obtainable. For example, for an image region PI located at the periphery of the original image, a wide-area image region EI of 5 vertical and 5 horizontal blocks around the image region PI cannot be obtained. In this case, the region where the 5-by-5-block region centered on the image region PI overlaps the original image is used as the wide-area image region EI. The brightness histogram Hm made by histogram preparing department 103 shows the distribution of the brightness values of all pixels in the wide-area image region Em. That is, in the brightness histograms Hm shown in Figures 46(a)~(c), the horizontal axis shows the brightness level of the input signal IS and the vertical axis shows the number of pixels.
Grey scale curve preparing department 104 accumulates the "number of pixels" of the brightness histogram Hm of the wide-area image region Em in order of brightness, and takes this cumulative curve as the gray-scale transformation curve Cm of the image region Pm (with reference to Figure 47). In the gray-scale transformation curve Cm shown in Figure 47, the horizontal axis shows the brightness value of the pixels of the image region Pm in the input signal IS, and the vertical axis shows the brightness value of the pixels of the image region Pm in the output signal OS. Gray scale handling part 105 loads the gray-scale transformation curve Cm and transforms the brightness values of the pixels of the image region Pm in the input signal IS based on the gray-scale transformation curve Cm.
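The per-region processing of this execution mode can be sketched as follows: a histogram is taken over the wide-area region Em, its cumulative curve serves as the gray-scale transformation curve Cm, and the curve is applied to the pixels of the small region Pm. The region slices and bin count are illustrative.

```python
import numpy as np

def process_region(image, pm_slice, em_slice, bins=256):
    hist, _ = np.histogram(image[em_slice], bins=bins, range=(0.0, 1.0))  # histogram Hm of region Em
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                              # cumulative curve = gray-scale transformation curve Cm
    idx = np.clip((image[pm_slice] * (bins - 1)).astype(int), 0, bins - 1)
    return cdf[idx]                             # gray-scale-processed brightness of region Pm
```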
(visual processing method and visual treatment system)
Figure 48 is a flow chart explaining the visual processing method in visual processing unit 101. The visual processing method shown in Figure 48 is realized by hardware in visual processing unit 101 and is a method of performing gray scale processing of the input signal IS (with reference to Fig. 1). In the visual processing method shown in Figure 48, the input signal IS is processed in units of images (steps S110~S116). The original image input as the input signal IS is divided into a plurality of image regions Pm (1≤m≤n, where n is the division number of the original image) (step S111), and gray scale processing is carried out for each image region Pm (steps S112~S115).
The brightness histogram Hm of the pixels of the wide-area image region Em consisting of each image region Pm and the image regions around it is made (step S112). Then, the gray-scale transformation curve Cm for each image region Pm is made based on the brightness histogram Hm (step S113). Explanation of the brightness histogram Hm and the gray-scale transformation curve Cm is omitted here (see the (action) section above). Using the gray-scale transformation curve Cm thus made, gray scale processing is carried out on the pixels of the image region Pm (step S114). Then, it is judged whether the processing of all the image regions Pm has finished (step S115); steps S112~S115 are repeated, a number of times equal to the division number of the original image, until the processing is judged to have finished. With this, the processing in units of an image is completed (step S116).
In addition, each step of visual processing method as shown in figure 48 can realize as visual handling procedure by computer etc.
(effect)
(1)
The gray-scale transformation curve Cm is created for each image region Pm. Therefore, compared with the case where the same gray-scale transformation is applied to the entire original image, more appropriate gray-scale processing can be performed.
(2)
The gray-scale transformation curve Cm created for each image region Pm is created based on the brightness histogram Hm of the wide-area image region Em. Therefore, even if the size of each image region Pm becomes small, a sufficient number of brightness values can be sampled. As a result, an appropriate gray-scale transformation curve Cm can be created even for a small image region Pm.
(3)
The wide-area image regions corresponding to adjacent image regions overlap each other. Therefore, the gray-scale transformation curves corresponding to adjacent image regions often show similar tendencies. Thus, a spatial-processing-like effect is added to the gray-scale processing of each image region, and the boundaries between adjacent image regions can be prevented from standing out unnaturally.
(4)
The size of each image region Pm is smaller than in the conventional art. Therefore, the occurrence of pseudo-contours within the image region Pm can be suppressed.
(variation)
The present invention is not limited to the above embodiment, and various modifications are possible within a scope not departing from its gist.
(1)
In the above embodiment, the original image was divided into 4800 regions as an example of the number of partitions, but the effect of the present invention is not limited to this case, and the same effect can be obtained with other numbers of partitions. There is a trade-off between the processing amount of the gray-scale processing and the visual effect with respect to the number of partitions. That is, increasing the number of partitions increases the processing amount of the gray-scale processing but yields a better visual effect (for example, better suppression of pseudo-contours).
(2)
In the above embodiment, 25 was given as an example of the number of image regions constituting a wide-area image region, but the effect of the present invention is not limited to this case, and the same effect can be obtained with other numbers.
(the 5th execution mode)
(features of the visual processing unit 111 according to the 5th execution mode)
The visual processing unit 111 according to the fifth embodiment of the present invention is described using Figure 49 to Figure 61. The visual processing unit 111 is a device that is built into or connected to a machine that handles images, such as a computer, television set, digital camera, portable phone, or PDA, and performs gray-scale processing of an image. The visual processing unit 111 is characterized in that a plurality of gray-scale transformation curves stored in advance as a LUT are switched and used.
(formation)
Figure 49 is a block diagram illustrating the structure of the visual processing unit 111. The visual processing unit 111 comprises an image segmentation portion 112, a selection signal derivation portion 113, and a gray-scale processing portion 120. The image segmentation portion 112 receives the input signal IS as input, divides the original image input as the input signal IS into a plurality of image regions Pm (1 ≤ m ≤ n, where n is the number of partitions of the original image), and outputs them. The selection signal derivation portion 113 outputs a selection signal Sm for selecting the gray-scale transformation curve Cm to be applied in the gray-scale processing of each image region Pm. The gray-scale processing portion 120 comprises a gray-scale processing execution portion 114 and a gray-scale correction portion 115. The gray-scale processing execution portion 114 holds a plurality of gray-scale transformation curve candidates G1 to Gp (p is the number of candidates) as a two-dimensional LUT, receives the input signal IS and the selection signal Sm as input, and outputs a gray-scale processed signal CS obtained by performing gray-scale processing on the pixels in each image region Pm. The gray-scale correction portion 115 receives the gray-scale processed signal CS as input and outputs the output signal OS obtained by correcting the gray scale of the gray-scale processed signal CS.
(about the gray-scale transformation curve candidate)
The gray-scale transformation curve candidates G1 to Gp are described using Figure 50. The gray-scale transformation curve candidates G1 to Gp are curves that give the relation between the brightness value of a pixel of the input signal IS and the brightness value of the corresponding pixel of the gray-scale processed signal CS. In Figure 50, the horizontal axis represents the brightness value of a pixel in the input signal IS, and the vertical axis represents the brightness value of the pixel in the gray-scale processed signal CS. The gray-scale transformation curve candidates G1 to Gp decrease monotonically with respect to their subscript, and satisfy the relation G1 ≥ G2 ≥ ... ≥ Gp for all brightness values of the pixels of the input signal IS. For example, if each of the gray-scale transformation curve candidates G1 to Gp is a power function of the brightness value of the pixel of the input signal IS, expressed as Gm = x^(δm) (1 ≤ m ≤ p, where x is a variable and δm is a constant), then the relation δ1 ≤ δ2 ≤ ... ≤ δp is satisfied. Here, the brightness value of the input signal IS takes values in the range [0.0, 1.0].
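A brief sketch of such a candidate family, assuming power-function candidates whose exponents increase with the subscript so that curves with larger subscripts lie lower, as stated above; the number of candidates, the exponent range, and the number of input levels are illustrative assumptions.

```python
import numpy as np

def power_curve_candidates(p=64, delta_min=0.3, delta_max=3.0, levels=64):
    """Candidates G1..Gp as power functions Gm(x) = x**delta_m on x in [0.0, 1.0].

    delta_1 <= delta_2 <= ... <= delta_p, so G1 >= G2 >= ... >= Gp everywhere.
    """
    x = np.linspace(0.0, 1.0, levels)
    deltas = np.linspace(delta_min, delta_max, p)
    return np.stack([x ** d for d in deltas])   # shape (p, levels)
```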
The above relation among the gray-scale transformation curve candidates G1 to Gp may fail to hold for a candidate with a large subscript when the input signal IS is small, or for a candidate with a small subscript when the input signal IS is large. In such cases there is little problem, because the influence on the image quality is small.
The gray-scale processing execution portion 114 holds the gray-scale transformation curve candidates G1 to Gp as a two-dimensional LUT. That is, the two-dimensional LUT is a lookup table (LUT) that gives the brightness value of a pixel of the gray-scale processed signal CS for the brightness value of a pixel of the input signal IS and the selection signal Sm that selects one of the gray-scale transformation curve candidates G1 to Gp. Figure 51 shows an example of this two-dimensional LUT. The two-dimensional LUT 141 shown in Figure 51 is a matrix of 64 rows and 64 columns, with the gray-scale transformation curve candidates G1 to G64 arranged in the row direction (horizontally). In the column direction (vertically) of the matrix are arranged the pixel values of the gray-scale processed signal CS corresponding to the values of the input signal IS divided into 64 levels, that is, corresponding to the value of the upper 6 bits of the pixel value of the input signal IS expressed in, for example, 10 bits. When the gray-scale transformation curve candidates G1 to Gp are power functions, the pixel values of the gray-scale processed signal CS take values within, for example, the range [0.0, 1.0].
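A minimal sketch of building and consulting such a 64×64 table, assuming the candidate curves from the previous sketch and a 10-bit input signal; the function names are illustrative and not part of the specification.

```python
import numpy as np

def build_2d_lut(curves):
    """2-D LUT: rows = 64 input levels (upper 6 bits of a 10-bit IS), columns = candidates G1..G64."""
    return np.asarray(curves).T                      # shape (levels, p)

def lookup(lut, pixel_10bit, selection_index):
    """Gray-scale processed value CS for one pixel and one selection signal Sm (1-based subscript)."""
    level = pixel_10bit >> 4                         # upper 6 bits -> row 0..63
    return lut[level, selection_index - 1]
```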
(operation)
The operation of each portion is described. The image segmentation portion 112 operates in substantially the same manner as the image segmentation portion 102 of Figure 44, and divides the original image input as the input signal IS into a plurality (n) of image regions Pm (see Figure 45). Here, the number of partitions of the original image is larger than the number of partitions (for example 4 to 16) of the conventional visual processing unit 300 shown in Figure 104; for example, the image is divided into 4800 regions, 80 in the horizontal direction and 60 in the vertical direction.
The selection signal derivation portion 113 selects, from the gray-scale transformation curve candidates G1 to Gp, the gray-scale transformation curve Cm to be applied to each image region Pm. Specifically, the selection signal derivation portion 113 calculates the average brightness value of the wide-area image region Em of the image region Pm, and selects one of the gray-scale transformation curve candidates G1 to Gp according to the calculated average brightness value. That is, the gray-scale transformation curve candidates G1 to Gp are associated with the average brightness value of the wide-area image region Em, and the larger the average brightness value, the larger the subscript of the selected gray-scale transformation curve candidate.
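A short sketch of this selection rule, assuming 64 candidates, a 10-bit brightness signal, and a simple linear mapping from average brightness to subscript; the mapping itself is an assumption made for illustration.

```python
import numpy as np

def selection_signal(wide_area_region, p=64):
    """Selection signal Sm (1..p): larger average brightness -> larger subscript."""
    mean = float(np.mean(wide_area_region)) / 1023.0   # assume 10-bit brightness values
    return 1 + int(round(mean * (p - 1)))
```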
Here, the wide-area image region Em is the same as that described using Figure 45 in the fourth embodiment. That is, the wide-area image region Em is a set of a plurality of image regions including the image region Pm, for example a set of 25 image regions of 5 blocks vertically and 5 blocks horizontally centered on the image region Pm. Depending on the position of the image region Pm, a wide-area image region Em of 5 vertical blocks and 5 horizontal blocks around the image region Pm cannot be obtained in some cases. For example, for an image region P1 located at the periphery of the original image, a wide-area image region E1 of 5 vertical blocks and 5 horizontal blocks around the image region P1 cannot be obtained. In this case, the region where a region of 5 vertical blocks by 5 horizontal blocks centered on the image region P1 overlaps the original image is used as the wide-area image region E1.
The selection result of the selection signal derivation portion 113 is output as a selection signal Sm indicating which of the gray-scale transformation curve candidates G1 to Gp has been selected. More specifically, the selection signal Sm is output as the value of the subscript (1 to p) of the selected gray-scale transformation curve candidate.
The gray-scale processing execution portion 114 receives, as input, the brightness value of each pixel included in the image region Pm of the input signal IS and the selection signal Sm, and outputs the brightness value of the gray-scale processed signal CS using, for example, the two-dimensional LUT 141 shown in Figure 51.
The gray-scale correction portion 115 corrects the brightness values of the pixels of the image region Pm included in the gray-scale processed signal CS, based on the position of each pixel and the gray-scale transformation curves selected for the image region Pm and the image regions surrounding the image region Pm. For example, the gray-scale transformation curve Cm applied to the pixels included in the image region Pm and the gray-scale transformation curves selected for the image regions surrounding the image region Pm are blended with the internal division ratio of the pixel position, and the brightness value of the corrected pixel is obtained.
The operation of the gray-scale correction portion 115 is described in more detail using Figure 52. Figure 52 shows a state in which the gray-scale transformation curves Co, Cp, Cq, and Cr of the image regions Po, Pp, Pq, and Pr (o, p, q, r are positive integers not greater than the number of partitions n (see Figure 45)) have been selected as the gray-scale transformation curve candidates Gs, Gt, Gu, and Gv (s, t, u, v are positive integers not greater than the number p of gray-scale transformation curve candidates).
Here, let the position of the pixel x (having brightness value [x]) of the image region Po that is the target of the gray-scale correction be internally divided as follows: it internally divides the segment between the center of the image region Po and the center of the image region Pp in the ratio [i : 1−i], and internally divides the segment between the center of the image region Po and the center of the image region Pq in the ratio [j : 1−j]. In this case, the brightness value [x'] of the pixel x after the gray-scale correction is obtained as [x'] = {(1−j)·(1−i)·[Gs] + (1−j)·(i)·[Gt] + (j)·(1−i)·[Gu] + (j)·(i)·[Gv]} · {[x] / [Gs]}. Here, [Gs], [Gt], [Gu], and [Gv] are the brightness values obtained when the gray-scale transformation curve candidates Gs, Gt, Gu, and Gv are applied to the brightness value [x].
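A minimal sketch that transcribes the correction formula above directly, assuming the four selected curve candidates are available as callable functions; the names and the guard against division by zero are illustrative assumptions, not part of the specification.

```python
def corrected_brightness(x, i, j, Gs, Gt, Gu, Gv):
    """Bilinear gray-scale correction of pixel value x between four selected curves.

    i, j are the internal-division ratios of the pixel position toward the
    neighboring regions Pp and Pq; Gs..Gv map a brightness value through the
    curve candidates selected for the regions Po, Pp, Pq, Pr.
    """
    gs, gt, gu, gv = Gs(x), Gt(x), Gu(x), Gv(x)
    blended = (1 - j) * (1 - i) * gs + (1 - j) * i * gt + j * (1 - i) * gu + j * i * gv
    return blended * (x / gs) if gs != 0 else blended
```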
(visual processing method and visual processing program)
Figure 53 is a flow chart illustrating the visual processing method in the visual processing unit 111. The visual processing method shown in Figure 53 is realized by hardware in the visual processing unit 111 and is a method of performing gray-scale processing of the input signal IS (see Figure 49). In the visual processing method shown in Figure 53, the input signal IS is processed in units of images (steps S120 to S126). The original image input as the input signal IS is divided into a plurality of image regions Pm (1 ≤ m ≤ n, where n is the number of partitions of the original image) (step S121), and gray-scale processing is performed for each image region Pm (steps S122 to S124).
In the processing of each image region Pm, the gray-scale transformation curve Cm to be applied to the image region Pm is selected from the gray-scale transformation curve candidates G1 to Gp (step S122). Specifically, the average brightness value of the wide-area image region Em of the image region Pm is calculated, and one of the gray-scale transformation curve candidates G1 to Gp is selected according to the calculated average brightness value. The gray-scale transformation curve candidates G1 to Gp are associated with the average brightness value of the wide-area image region Em, and the larger the average brightness value, the larger the subscript of the selected gray-scale transformation curve candidate. An explanation of the wide-area image region Em is omitted here (see the (operation) section above).
For the brightness value of each pixel included in the image region Pm of the input signal IS and the selection signal Sm indicating the gray-scale transformation curve candidate selected in step S122 from among the gray-scale transformation curve candidates G1 to Gp, the brightness value of the gray-scale processed signal CS is output using, for example, the two-dimensional LUT shown in Figure 51 (step S123). Then, it is determined whether processing of all the image regions Pm is finished (step S124), and steps S122 to S124 are repeated, as many times as the number of partitions of the original image, until it is determined that processing is finished. With this, processing in image-region units ends.
The brightness values of the pixels of the image region Pm included in the gray-scale processed signal CS are corrected based on the position of each pixel and the gray-scale transformation curves selected for the image region Pm and the image regions surrounding the image region Pm (step S125). For example, the gray-scale transformation curve Cm applied to the pixels included in the image region Pm and the gray-scale transformation curves selected for the image regions surrounding the image region Pm are blended with the internal division ratio of the pixel position, and the brightness value of the corrected pixel is obtained. The details of the correction are omitted here (see the (operation) section above and Figure 52).
With this, processing in image units ends (step S126).
Each step of the visual processing method shown in Figure 53 may also be realized by a computer or the like as a visual processing program.
(effect)
With the present invention, substantially the same effects as those described in the (effect) column of the fourth embodiment above can be obtained. The effects characteristic of the fifth embodiment are described below.
(1)
The gray-scale transformation curve Cm selected for each image region Pm is selected based on the average brightness value of the wide-area image region Em. Therefore, even if the size of the image region Pm is small, a sufficient number of brightness values can be sampled. As a result, an appropriate gray-scale transformation curve Cm can be selected and applied even for a small image region Pm.
(2)
The gray-scale processing execution portion 114 holds a two-dimensional LUT created in advance. Therefore, the processing load required for the gray-scale processing, more specifically the processing load required to create the gray-scale transformation curves Cm, can be reduced. As a result, the processing required for the gray-scale processing of the image regions Pm can be sped up.
(3)
The gray-scale processing execution portion 114 performs the gray-scale processing using the two-dimensional LUT. The two-dimensional LUT is read from a storage device such as a hard disk or a ROM provided in the visual processing unit 111 and used for the gray-scale processing. By changing the contents of the two-dimensional LUT that is read, various kinds of gray-scale processing can be realized without changing the hardware configuration. That is, gray-scale processing better suited to the characteristics of the original image can be realized.
(4)
The gray-scale correction portion 115 corrects the gray scale of the pixels of the image region Pm that have been gray-scale processed using the single gray-scale transformation curve Cm. Therefore, an output signal OS resulting from more appropriate gray-scale processing can be obtained. For example, the occurrence of pseudo-contours can be suppressed. Furthermore, in the output signal OS, the boundaries between the image regions Pm can be further prevented from standing out unnaturally.
(variation)
The present invention is not limited to the above embodiment, and various modifications are possible within a scope not departing from its gist.
(1)
In the above embodiment, the original image was divided into 4800 regions as an example of the number of partitions, but the effect of the present invention is not limited to this case, and the same effect can be obtained with other numbers of partitions. There is a trade-off between the processing amount of the gray-scale processing and the visual effect with respect to the number of partitions. That is, increasing the number of partitions increases the processing amount of the gray-scale processing but yields a better visual effect (for example, an image in which pseudo-contours are suppressed).
(2)
In the above embodiment, 25 was given as an example of the number of image regions constituting a wide-area image region, but the effect of the present invention is not limited to this case, and the same effect can be obtained with other numbers.
(3)
In the above embodiment, the two-dimensional LUT 141 made up of a matrix of 64 rows and 64 columns was described as an example of the two-dimensional LUT. Here, the effect of the present invention is not limited to a two-dimensional LUT of this size. For example, a matrix in which a larger number of gray-scale transformation curve candidates are arranged in the row direction may be used. Alternatively, the pixel values of the gray-scale processed signal CS corresponding to the pixel values of the input signal IS divided into finer steps may be arranged in the column direction of the matrix. Specifically, the pixel values of the gray-scale processed signal CS may be arranged in correspondence with each pixel value of the input signal IS expressed in, for example, 10 bits.
If the size of the two-dimensional LUT is made larger, more appropriate gray-scale processing can be performed; if it is made smaller, the memory for storing the two-dimensional LUT can be reduced.
(4)
In the above embodiment, it was described that the pixel values of the gray-scale processed signal CS corresponding to the values of the input signal IS divided into 64 levels, that is, corresponding to the value of the upper 6 bits of the pixel value of the input signal IS expressed in, for example, 10 bits, are arranged in the column direction of the matrix. Here, the gray-scale processed signal CS may be output by the gray-scale processing execution portion 114 as the matrix components linearly interpolated using the value of the lower 4 bits of the pixel value of the input signal IS. That is, the matrix components corresponding to the value of the upper 6 bits of the pixel value of the input signal IS expressed in, for example, 10 bits are arranged in the column direction of the matrix, and the matrix component corresponding to the value of the upper 6 bits of the pixel value of the input signal IS and the matrix component corresponding to that value plus [1] (for example, the component one row below in Figure 51) are linearly interpolated using the value of the lower 4 bits of the pixel value of the input signal IS, and the result is output as the gray-scale processed signal CS.
In this way, more appropriate gray-scale processing can be performed even if the size of the two-dimensional LUT 141 (see Figure 51) is small.
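A brief sketch of this interpolated lookup, assuming the table built in the earlier sketch and a 10-bit input pixel; clamping at the last row is an assumption for the boundary case, not stated in the specification.

```python
def lookup_interpolated(lut, pixel_10bit, selection_index):
    """LUT lookup with linear interpolation on the lower 4 bits of a 10-bit pixel.

    lut has 64 rows (upper-6-bit levels) and one column per curve candidate.
    """
    hi = pixel_10bit >> 4                           # upper 6 bits: row index 0..63
    lo = pixel_10bit & 0xF                          # lower 4 bits: interpolation fraction
    col = selection_index - 1
    a = lut[hi, col]
    b = lut[min(hi + 1, lut.shape[0] - 1), col]     # clamp at the last row
    return a + (b - a) * (lo / 16.0)
```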
(5)
In the above embodiment, it was described that the gray-scale transformation curve Cm suited to the image region Pm is selected based on the average brightness value of the wide-area image region Em. Here, the method of selecting the gray-scale transformation curve Cm is not limited to this method. For example, the gray-scale transformation curve Cm suited to the image region Pm may be selected based on the maximum brightness value or the minimum brightness value of the wide-area image region Em. In selecting the gray-scale transformation curve Cm, the value [Sm] of the selection signal Sm may itself be the average brightness value, the maximum brightness value, or the minimum brightness value of the wide-area image region Em. In this case, the range of values that the selection signal Sm can take is divided into 64 levels, and each level is associated with one of the gray-scale transformation curve candidates.
Alternatively, for example, the gray-scale transformation curve Cm suited to the image region Pm may be selected as follows. That is, the average brightness value is obtained for each image region Pm, and a provisional selection signal Sm' is obtained for each image region Pm from each average brightness value. Here, the provisional selection signal Sm' takes the subscript number of one of the gray-scale transformation curve candidates G1 to Gp as its value. Then, for each of the image regions included in the wide-area image region Em, the values of the provisional selection signals Sm' are averaged to obtain the value [Sm] of the selection signal Sm of the image region Pm, and the candidate whose subscript is the integer closest to the value [Sm] among the gray-scale transformation curve candidates G1 to Gp is selected as the gray-scale transformation curve Cm.
(6)
In the above embodiment, it was described that the gray-scale transformation curve Cm suited to the image region Pm is selected based on the average brightness value of the wide-area image region Em. Here, the gray-scale transformation curve Cm suited to the image region Pm may be selected based not on a simple average over the wide-area image region Em but on a weighted average. For example, as shown in Figure 54, the average brightness value of each image region constituting the wide-area image region Em is obtained, and for image regions Ps1, Ps2, ... whose average brightness values differ greatly from the average brightness value of the image region Pm, the weight is reduced or those regions are excluded, and the average brightness value of the wide-area image region Em is then obtained.
In this way, even when the wide-area image region Em includes a region of anomalous brightness (for example, when the wide-area image region Em includes the boundary between two objects of different brightness values), the brightness values of that anomalous region have little influence on the selection of the gray-scale transformation curve Cm applied to the image region Pm, and appropriate gray-scale processing can be performed.
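A short sketch of the weighted (here, exclusion-based) average described above, assuming normalized block means in [0.0, 1.0]; the threshold value is an illustrative assumption.

```python
import numpy as np

def weighted_wide_area_mean(block_means, center_mean, threshold=0.2):
    """Weighted average over the blocks of a wide-area region Em.

    Blocks whose mean brightness differs from the center block's mean by more
    than `threshold` are excluded; the remaining blocks are averaged.
    """
    means = np.asarray(block_means, dtype=np.float64)
    keep = np.abs(means - center_mean) <= threshold
    return float(means[keep].mean()) if keep.any() else center_mean
```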
(7)
In the above embodiment, the gray-scale correction portion 115 is optional. That is, even when the gray-scale processed signal CS is used as the output, the same effects as those described in the (effect) column of the fourth embodiment and the effects (1) and (2) described in the (effect) column of the fifth embodiment can be obtained, compared with the conventional visual processing unit 300 (see Figure 104).
(8)
In the above embodiment, it was described that the gray-scale transformation curve candidates G1 to Gp decrease monotonically with respect to their subscript and satisfy the relation G1 ≥ G2 ≥ ... ≥ Gp for all brightness values of the pixels of the input signal. Here, the gray-scale transformation curve candidates G1 to Gp held by the two-dimensional LUT need not satisfy the relation G1 ≥ G2 ≥ ... ≥ Gp for part of the brightness values of the pixels of the input signal IS. That is, any of the gray-scale transformation curve candidates G1 to Gp may intersect one another.
For example, when a small bright part exists in a dark night scene (such as a neon light in the night scene), even if the value of the input signal IS is large, the gray-scale processed value of the picture signal has little influence on the image quality when the average brightness value of the wide-area image region Em is small. In such a case, the gray-scale transformation curve candidates G1 to Gp held by the two-dimensional LUT need not satisfy the relation G1 ≥ G2 ≥ ... ≥ Gp for that part of the brightness values of the pixels of the input signal IS. That is, for the part where the gray-scale processed value has little influence on the image quality, the values stored in the two-dimensional LUT may be arbitrary.
The values stored in the two-dimensional LUT preferably maintain a monotonically increasing or monotonically decreasing relation with respect to the value of the input signal IS for the same value of the selection signal Sm, and with respect to the value of the selection signal Sm for the same value of the input signal IS.
Further, in the above embodiment, it was described that the gray-scale transformation curve candidates G1 to Gp held by the two-dimensional LUT are power functions. Here, the gray-scale transformation curve candidates G1 to Gp need not be strictly expressible as power functions. They may also be functions with shapes such as an S-curve or an inverse S-curve.
(9)
The visual processing unit 111 may further comprise a profile data creation portion that creates the profile data to be stored as the values of the two-dimensional LUT. Specifically, the profile data creation portion is constituted by the image segmentation portion 102 and the gray-scale transformation curve derivation portion 110 of the visual processing unit 101 (see Figure 44), and the set of the plurality of created gray-scale transformation curves is stored in the two-dimensional LUT as the profile data.
Each of the gray-scale transformation curves stored in the two-dimensional LUT may also be associated with the input signal IS after spatial processing. In this case, in the visual processing unit 111, the image segmentation portion 112 and the selection signal derivation portion 113 may be replaced with a spatial processing portion that performs spatial processing on the input signal IS.
(10)
In the above embodiment, the brightness values of the pixels of the input signal IS need not be values in the range [0.0, 1.0]. When the input signal IS is input as values in another range, the values of that range may be normalized to [0.0, 1.0] and used. Alternatively, without performing normalization, the values handled in the above processing may be changed appropriately.
(11)
Each of the gray-scale transformation curve candidates G1 to Gp may also be a gray-scale transformation curve that performs gray-scale processing on an input signal IS having a dynamic range wider than the usual dynamic range and outputs a gray-scale processed signal CS of the usual dynamic range.
In recent years, through methods such as suppressing the light quantity using a CCD with a good S/N ratio, opening the electronic shutter twice with different lengths, or using a sensor having low-sensitivity and high-sensitivity pixels, the development of machines capable of handling a dynamic range one to three orders of magnitude wider than the usual dynamic range has been progressing.
Along with this, it is also required that appropriate gray-scale processing be performed when the input signal IS has a dynamic range wider than the usual dynamic range (for example, a signal in the range of values [0.0, 1.0]).
Here, as shown in Figure 55, a gray-scale transformation curve that outputs a gray-scale processed signal CS in the range [0.0, 1.0] is used even for an input signal IS exceeding the range [0.0, 1.0].
In this way, appropriate gray-scale processing can be performed even for an input signal IS having a wide dynamic range, and a gray-scale processed signal CS of the usual dynamic range can be output.
In the present embodiment, it was stated that "when the gray-scale transformation curve candidates G1 to Gp are power functions, the pixel values of the gray-scale processed signal CS take values within, for example, the range [0.0, 1.0]". Here, the pixel values of the gray-scale processed signal CS are not limited to this range. For example, the gray-scale transformation curve candidates G1 to Gp may perform dynamic range compression on an input signal IS of values [0.0, 1.0].
(12)
In the above embodiment, it was described that "the gray-scale processing execution portion 114 holds the gray-scale transformation curve candidates G1 to Gp as a two-dimensional LUT". Here, the gray-scale processing execution portion 114 may instead hold one-dimensional LUTs that store the relation between the selection signal Sm and curve parameters that determine the gray-scale transformation curve candidates G1 to Gp.
(formation)
Figure 56 is a block diagram showing the structure of a gray-scale processing execution portion 144 as a modification of the gray-scale processing execution portion 114. The gray-scale processing execution portion 144 receives the input signal IS and the selection signal Sm as input, and outputs the gray-scale processed signal CS obtained by performing gray-scale processing on the input signal IS. The gray-scale processing execution portion 144 comprises a curve parameter output portion 145 and a computation portion 148.
The curve parameter output portion 145 is constituted by a first LUT 146 and a second LUT 147. The first LUT 146 and the second LUT 147 receive the selection signal Sm as input, and output the curve parameters P1 and P2, respectively, of the gray-scale transformation curve candidate Gm specified by the selection signal Sm.
The computation portion 148 receives the curve parameters P1 and P2 and the input signal IS as input, and outputs the gray-scale processed signal CS.
(about the one-dimensional LUTs)
The first LUT 146 and the second LUT 147 are one-dimensional LUTs that store the values of the curve parameters P1 and P2, respectively, corresponding to each selection signal Sm. Before describing the first LUT 146 and the second LUT 147 in detail, the content of the curve parameters P1 and P2 is described.
The relation between the curve parameters P1 and P2 and the gray-scale transformation curve candidates G1 to Gp is described using Figure 57. Figure 57 shows the gray-scale transformation curve candidates G1 to Gp. Here, the gray-scale transformation curve candidates G1 to Gp decrease monotonically with respect to their subscript, and satisfy the relation G1 ≥ G2 ≥ ... ≥ Gp for all brightness values of the pixels of the input signal IS. This relation among the gray-scale transformation curve candidates G1 to Gp may fail to hold, for example, for a candidate with a large subscript when the input signal IS is small, or for a candidate with a small subscript when the input signal IS is large.
The curve parameters P1 and P2 are output as the values of the gray-scale processed signal CS for predetermined values of the input signal IS. That is, when the gray-scale transformation curve candidate Gm is specified by the selection signal Sm, the value of the curve parameter P1 is output as the value [R1m] of the gray-scale transformation curve candidate Gm corresponding to the predetermined value [X1] of the input signal IS, and the value of the curve parameter P2 is output as the value [R2m] of the gray-scale transformation curve candidate Gm corresponding to the predetermined value [X2] of the input signal IS. Here, the value [X2] is larger than the value [X1].
Next, the first LUT 146 and the second LUT 147 are described.
The first LUT 146 and the second LUT 147 store the values of the curve parameters P1 and P2, respectively, corresponding to each selection signal Sm. More specifically, for example, for each selection signal Sm given as a 6-bit signal, the values of the curve parameters P1 and P2 are each given as 6 bits. The numbers of bits secured for the selection signal Sm and the curve parameters P1 and P2 are not limited to these.
The relation between the curve parameters P1 and P2 and the selection signal Sm is described using Figure 58. Figure 58 shows the change in the values of the curve parameters P1 and P2 corresponding to the selection signal Sm. The first LUT 146 and the second LUT 147 store the values of the curve parameters P1 and P2 corresponding to each selection signal Sm. For example, the value [R1m] is stored as the value of the curve parameter P1 corresponding to the selection signal Sm, and the value [R2m] is stored as the value of the curve parameter P2.
With the first LUT 146 and the second LUT 147 described above, the curve parameters P1 and P2 are output for the input selection signal Sm.
(about the computation portion 148)
The computation portion 148 derives the gray-scale processed signal CS corresponding to the input signal IS based on the obtained curve parameters P1 and P2 (the values [R1m] and [R2m]). The concrete procedure is described below. Here, the value of the input signal IS is given in the range [0.0, 1.0], and the gray-scale transformation curve candidates G1 to Gp transform an input signal IS given in the range [0.0, 1.0] into the range [0.0, 1.0]. The present invention is also applicable when the input signal IS is not limited to this range.
First, the computation portion 148 compares the value of the input signal IS with the predetermined values [X1] and [X2].
When the value [x] of the input signal IS is not less than [0.0] and less than [X1], the value [y] of the gray-scale processed signal CS corresponding to the value [x] is obtained on the straight line connecting the origin and the coordinate ([X1], [R1m]) in Figure 57. More specifically, the value [y] is obtained by the formula [y] = ([x] / [X1]) × [R1m].
When the value of the input signal IS is not less than [X1] and less than [X2], the value [y] corresponding to the value [x] is obtained on the straight line connecting the coordinates ([X1], [R1m]) and ([X2], [R2m]) in Figure 57. More specifically, the value [y] is obtained by the formula [y] = [R1m] + {([R2m] − [R1m]) / ([X2] − [X1])} × ([x] − [X1]).
When the value of the input signal IS is not less than [X2] and not more than [1.0], the value [y] corresponding to the value [x] is obtained on the straight line connecting the coordinates ([X2], [R2m]) and ([1.0], [1.0]) in Figure 57. More specifically, the value [y] is obtained by the formula [y] = [R2m] + {([1.0] − [R2m]) / ([1.0] − [X2])} × ([x] − [X2]).
By the above computation, the computation portion 148 derives the gray-scale processed signal CS corresponding to the input signal IS.
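A minimal sketch of the three-segment computation above; the particular values of [X1] and [X2] and the function name are illustrative assumptions, while r1 and r2 stand for the parameters [R1m] and [R2m] looked up for the selected candidate Gm.

```python
def gray_scale_from_parameters(x, r1, r2, x1=0.25, x2=0.75):
    """Piecewise-linear curve through (0, 0), (x1, r1), (x2, r2), (1, 1).

    x  : input signal IS value in [0.0, 1.0]
    r1 : curve parameter P1 ([R1m]) for the selected candidate Gm
    r2 : curve parameter P2 ([R2m]) for the selected candidate Gm
    """
    if x < x1:
        return (x / x1) * r1
    if x < x2:
        return r1 + (r2 - r1) / (x2 - x1) * (x - x1)
    return r2 + (1.0 - r2) / (1.0 - x2) * (x - x2)
```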
(gray-scale processing method and gray-scale processing program)
The above processing may be executed as a gray-scale processing program by a computer or the like. The gray-scale processing program is a program for causing a computer to execute the gray-scale processing method described below.
The gray-scale processing method is a method of acquiring the input signal IS and the selection signal Sm and outputting the gray-scale processed signal CS, and is characterized in that the gray-scale processing of the input signal IS is performed using one-dimensional LUTs.
First, when the selection signal Sm is acquired, the curve parameters P1 and P2 are output from the first LUT 146 and the second LUT 147. Detailed descriptions of the first LUT 146, the second LUT 147, and the curve parameters P1 and P2 are omitted here.
Then, the gray-scale processing of the input signal IS is performed based on the curve parameters P1 and P2. Since the details of the gray-scale processing have been given in the description of the computation portion 148, their explanation is omitted here.
By the above gray-scale processing method, the gray-scale processed signal CS corresponding to the input signal IS is derived.
(effect)
The gray-scale processing execution portion 144, as a modification of the gray-scale processing execution portion 114, holds two one-dimensional LUTs instead of a two-dimensional LUT. Therefore, the memory capacity for storing the lookup tables can be reduced.
(variation)
(1)
In the above modification, it was described that "the values of the curve parameters P1 and P2 are the values of the gray-scale transformation curve candidate Gm corresponding to predetermined values of the input signal IS". Here, the curve parameters P1 and P2 may be other curve parameters of the gray-scale transformation curve candidate Gm. Concrete examples are described below.
(1-1)
The curve parameters may be the slopes of the gray-scale transformation curve candidate Gm. This is described concretely using Figure 57. When the gray-scale transformation curve candidate Gm is specified by the selection signal Sm, the value of the curve parameter P1 is the value [K1m] of the slope of the gray-scale transformation curve candidate Gm in the predetermined range [0.0 to X1] of the input signal IS, and the value of the curve parameter P2 is the value [K2m] of the slope of the gray-scale transformation curve candidate Gm in the predetermined range [X1 to X2] of the input signal IS.
The relation between the curve parameters P1 and P2 and the selection signal Sm is described using Figure 59. Figure 59 shows the change in the values of the curve parameters P1 and P2 corresponding to the selection signal Sm. The first LUT 146 and the second LUT 147 store the values of the curve parameters P1 and P2 corresponding to each selection signal Sm. For example, the value [K1m] is stored as the value of the curve parameter P1 corresponding to the selection signal Sm, and the value [K2m] is stored as the value of the curve parameter P2.
With the first LUT 146 and the second LUT 147 described above, the curve parameters P1 and P2 are output for the input selection signal Sm.
The computation portion 148 derives the gray-scale processed signal CS corresponding to the input signal IS based on the obtained curve parameters P1 and P2. The concrete procedure is described below.
First, the computation portion 148 compares the value of the input signal IS with the predetermined values [X1] and [X2].
When the value [x] of the input signal IS is not less than [0.0] and less than [X1], the value [y] of the gray-scale processed signal CS corresponding to the value [x] is obtained on the straight line connecting the origin and the coordinate ([X1], [K1m] × [X1]) (hereinafter, [K1m] × [X1] is written as [Y1]) in Figure 57. More specifically, the value [y] is obtained by the formula [y] = [K1m] × [x].
When the value of the input signal IS is not less than [X1] and less than [X2], the value [y] corresponding to the value [x] is obtained on the straight line connecting the coordinates ([X1], [Y1]) and ([X2], [K1m] × [X1] + [K2m] × ([X2] − [X1])) (hereinafter, the latter ordinate is written as [Y2]) in Figure 57. More specifically, the value [y] is obtained by the formula [y] = [Y1] + [K2m] × ([x] − [X1]).
When the value of the input signal IS is not less than [X2] and not more than [1.0], the value [y] corresponding to the value [x] is obtained on the straight line connecting the coordinates ([X2], [Y2]) and ([1.0], [1.0]) in Figure 57. More specifically, the value [y] is obtained by the formula [y] = [Y2] + {([1.0] − [Y2]) / ([1.0] − [X2])} × ([x] − [X2]).
By the above computation, the computation portion 148 derives the gray-scale processed signal CS corresponding to the input signal IS.
(1-2)
The curve parameters may be coordinates on the gray-scale transformation curve candidate Gm. This is described concretely using Figure 60. When the gray-scale transformation curve candidate Gm is specified by the selection signal Sm, the value of the curve parameter P1 is the component value [Mm] of one coordinate on the gray-scale transformation curve candidate Gm, and the value of the curve parameter P2 is the component value [Nm] of another coordinate on the gray-scale transformation curve candidate Gm. Further, the gray-scale transformation curve candidates G1 to Gp are all curves passing through the coordinate (X1, Y1).
The relation between the curve parameters P1 and P2 and the selection signal Sm is described using Figure 61. Figure 61 shows the change in the values of the curve parameters P1 and P2 corresponding to the selection signal Sm. The first LUT 146 and the second LUT 147 store the values of the curve parameters P1 and P2 corresponding to each selection signal Sm. For example, the value [Mm] is stored as the value of the curve parameter P1 corresponding to the selection signal Sm, and the value [Nm] is stored as the value of the curve parameter P2.
With the first LUT 146 and the second LUT 147 described above, the curve parameters P1 and P2 are output for the input selection signal Sm.
The computation portion 148 derives the gray-scale processed signal CS from the input signal IS by the same processing as in the modification described using Figure 57. Detailed explanation is omitted.
(1-3)
The above modifications are examples, and the curve parameters P1 and P2 may be other curve parameters of the gray-scale transformation curve candidate Gm.
The number of curve parameters is not limited to the above; it may be smaller or larger.
In the description of the computation portion 148, the computation was described for the case where the gray-scale transformation curve candidates G1 to Gp are curves made up of straight-line segments. Here, when coordinates on the gray-scale transformation curve candidates G1 to Gp are given as curve parameters, a smooth curve passing through the given coordinates may be created (curve fitting), and the gray-scale transformation processing may be performed using the created curve.
(2)
In the above modification, it was described that "the curve parameter output portion 145 is constituted by the first LUT 146 and the second LUT 147". Here, the curve parameter output portion 145 may output the curve parameters without holding LUTs that store the values of the curve parameters P1 and P2 corresponding to the values of the selection signal Sm.
In this case, the curve parameter output portion 145 computes the values of the curve parameters P1 and P2. More specifically, the curve parameter output portion 145 stores parameters representing the curves of the curve parameters P1 and P2 shown in Figure 58, Figure 59, Figure 61, and so on. The curve parameter output portion 145 specifies the curves of the curve parameters P1 and P2 from the stored parameters. It then uses the curves of the curve parameters P1 and P2 to output the values of the curve parameters P1 and P2 corresponding to the selection signal Sm.
Here, the parameters for specifying the curves of the curve parameters P1 and P2 are, for example, coordinates on the curves, slopes of the curves, curvatures, and the like. For example, the curve parameter output portion 145 stores two coordinates on each of the curves of the curve parameters P1 and P2 shown in Figure 58, and uses the straight line connecting those two coordinates as the curve of the curve parameter P1 or P2.
Here, when specifying the curves of the curve parameters P1 and P2 from the parameters, not only an approximating straight line but also an approximating polyline, an approximating curve, or the like may be used.
In this way, the curve parameters can be output without using a memory for storing LUTs. That is, the capacity of the memory provided in the device can be further reduced.
(the 6th execution mode)
(features of the visual processing unit 121 according to the 6th execution mode)
The visual processing unit 121 according to the sixth embodiment of the present invention is described using Figure 62 to Figure 64. The visual processing unit 121 is a device that is built into or connected to a machine that handles images, such as a computer, television set, digital camera, portable phone, or PDA, and performs gray-scale processing of an image. The visual processing unit 121 is characterized in that a plurality of gray-scale transformation curves stored in advance as a LUT are switched and used for each pixel that is the target of the gray-scale processing.
(formation)
Figure 62 is a block diagram illustrating the structure of the visual processing unit 121. The visual processing unit 121 comprises an image segmentation portion 122, a selection signal derivation portion 123, and a gray-scale processing portion 130. The image segmentation portion 122 receives the input signal IS as input, divides the original image input as the input signal IS into a plurality of image regions Pm (1 ≤ m ≤ n, where n is the number of partitions of the original image), and outputs them. The selection signal derivation portion 123 outputs a selection signal Sm for selecting the gray-scale transformation curve Cm for each image region Pm. The gray-scale processing portion 130 comprises a selection signal correction portion 124 and a gray-scale processing execution portion 125. The selection signal correction portion 124 receives the selection signals Sm as input, and outputs, for each pixel, a selection signal SS obtained by correcting the selection signals Sm output for the respective image regions Pm. The gray-scale processing execution portion 125 holds a plurality of gray-scale transformation curve candidates G1 to Gp (p is the number of candidates) as a two-dimensional LUT, receives the input signal IS and the selection signal SS of each pixel as input, and outputs the output signal OS obtained by performing gray-scale processing on each pixel.
(about the gray-scale transformation curve candidate)
Since the gray-scale transformation curve candidates G1 to Gp are the same as those described using Figure 50 in the fifth embodiment, their explanation is omitted. However, in the present embodiment, the gray-scale transformation curve candidates G1 to Gp are curves that give the relation between the brightness value of a pixel of the input signal IS and the brightness value of the corresponding pixel of the output signal OS.
The gray-scale processing execution portion 125 holds the gray-scale transformation curve candidates G1 to Gp as a two-dimensional LUT. That is, the two-dimensional LUT is a lookup table (LUT) that gives the brightness value of a pixel of the output signal OS for the brightness value of a pixel of the input signal IS and the selection signal SS that selects one of the gray-scale transformation curve candidates G1 to Gp. Since a concrete example is substantially the same as that described using Figure 51 in the fifth embodiment, its explanation is omitted here. However, in the present embodiment, the pixel values of the output signal OS corresponding to the values of the upper 6 bits of the pixel value of the input signal IS expressed in, for example, 10 bits are arranged in the column direction of the matrix.
(operation)
The operation of each portion is described. The image segmentation portion 122 operates in substantially the same manner as the image segmentation portion 102 of Figure 44, and divides the original image input as the input signal IS into a plurality (n) of image regions Pm (see Figure 45). Here, the number of partitions of the original image is larger than the number of partitions (for example 4 to 16) of the conventional visual processing unit 300 shown in Figure 104; for example, the image is divided into 4800 regions, 80 in the horizontal direction and 60 in the vertical direction.
The selection signal derivation portion 123 selects, from the gray-scale transformation curve candidates G1 to Gp, the gray-scale transformation curve Cm for each image region Pm. Specifically, the selection signal derivation portion 123 calculates the average brightness value of the wide-area image region Em of the image region Pm, and selects one of the gray-scale transformation curve candidates G1 to Gp according to the calculated average brightness value. That is, the gray-scale transformation curve candidates G1 to Gp are associated with the average brightness value of the wide-area image region Em, and the larger the average brightness value, the larger the subscript of the selected gray-scale transformation curve candidate.
Here, the wide-area image region Em is the same as that described using Figure 45 in the fourth embodiment. That is, the wide-area image region Em is a set of a plurality of image regions including the image region Pm, for example a set of 25 image regions of 5 blocks vertically and 5 blocks horizontally centered on the image region Pm. Depending on the position of the image region Pm, a wide-area image region Em of 5 vertical blocks and 5 horizontal blocks around the image region Pm cannot be obtained in some cases. For example, for an image region P1 located at the periphery of the original image, a wide-area image region E1 of 5 vertical blocks and 5 horizontal blocks around the image region P1 cannot be obtained. In this case, the region where a region of 5 vertical blocks by 5 horizontal blocks centered on the image region P1 overlaps the original image is used as the wide-area image region E1.
The selection result of the selection signal derivation portion 123 is output as a selection signal Sm indicating which of the gray-scale transformation curve candidates G1 to Gp has been selected. More specifically, the selection signal Sm is output as the value of the subscript (1 to p) of the selected gray-scale transformation curve candidate.
The selection signal correction portion 124 corrects the selection signals Sm output for the respective image regions Pm, and thereby outputs, for each pixel, a selection signal SS for selecting a gray-scale transformation curve for each pixel constituting the input signal IS. For example, the selection signals output for the image region Pm and the image regions surrounding the image region Pm are blended with the internal division ratio of the pixel position, and the selection signal SS corresponding to each pixel included in the image region Pm is obtained.
The operation of the selection signal correction portion 124 is described in more detail using Figure 63. Figure 63 shows a state in which the selection signals So, Sp, Sq, and Sr have been output for the image regions Po, Pp, Pq, and Pr (o, p, q, r are positive integers not greater than the number of partitions n (see Figure 45)).
Here, let the position of the pixel x that is the target of the gray-scale correction be internally divided as follows: it internally divides the segment between the center of the image region Po and the center of the image region Pp in the ratio [i : 1−i], and internally divides the segment between the center of the image region Po and the center of the image region Pq in the ratio [j : 1−j]. In this case, the value [SS] of the selection signal SS corresponding to the pixel x is obtained as [SS] = {(1−j)·(1−i)·[So] + (1−j)·(i)·[Sp] + (j)·(1−i)·[Sq] + (j)·(i)·[Sr]}. Here, [So], [Sp], [Sq], and [Sr] are the values of the selection signals So, Sp, Sq, and Sr.
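A minimal sketch of the per-pixel interpolation formula above; the function name is illustrative, and the ratios i and j are assumed to be given per pixel from its position within the region Po.

```python
def selection_signal_per_pixel(i, j, so, sp, sq, sr):
    """Per-pixel selection signal SS interpolated from the region-level signals.

    i, j are the internal-division ratios of the pixel position toward the
    neighboring regions Pp and Pq; so..sr are the selection signals of Po..Pr.
    """
    return ((1 - j) * (1 - i) * so + (1 - j) * i * sp
            + j * (1 - i) * sq + j * i * sr)
```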
The gray-scale processing execution portion 125 receives, as input, the brightness value of each pixel included in the input signal IS and the selection signal SS, and outputs the brightness value of the output signal OS using, for example, the two-dimensional LUT 141 shown in Figure 51.
When the value [SS] of the selection signal SS is not equal to any of the subscripts (1 to p) of the gray-scale transformation curve candidates G1 to Gp held by the two-dimensional LUT 141, the gray-scale transformation curve candidate whose subscript is the integer closest to the value [SS] is used in the gray-scale processing of the input signal IS.
(visual processing method and visual processing program)
Figure 64 is a flow chart illustrating the visual processing method in the visual processing unit 121. The visual processing method shown in Figure 64 is realized by hardware in the visual processing unit 121 and is a method of performing gray-scale processing of the input signal IS (see Figure 62). In the visual processing method shown in Figure 64, the input signal IS is processed in units of images (steps S130 to S137). The original image input as the input signal IS is divided into a plurality of image regions Pm (1 ≤ m ≤ n, where n is the number of partitions of the original image) (step S131), a gray-scale transformation curve Cm is selected for each image region Pm (steps S132 to S133), and, based on the selection signal Sm for selecting the gray-scale transformation curve Cm for each image region Pm, a gray-scale transformation curve is selected for each pixel of the original image and gray-scale processing is performed in pixel units (steps S134 to S136).
Each step is described in detail.
For each image region Pm, the gray-scale transformation curve Cm is selected from the gray-scale transformation curve candidates G1 to Gp (step S132). Specifically, the average brightness value of the wide-area image region Em of the image region Pm is calculated, and one of the gray-scale transformation curve candidates G1 to Gp is selected according to the calculated average brightness value. The gray-scale transformation curve candidates G1 to Gp are associated with the average brightness value of the wide-area image region Em, and the larger the average brightness value, the larger the subscript of the selected gray-scale transformation curve candidate. An explanation of the wide-area image region Em is omitted here (see the (operation) section above). The result of the selection is output as a selection signal Sm indicating which of the gray-scale transformation curve candidates G1 to Gp has been selected. More specifically, the selection signal Sm is output as the value of the subscript (1 to p) of the selected gray-scale transformation curve candidate. Then, it is determined whether processing of all the image regions Pm is finished (step S133), and steps S132 to S133 are repeated, as many times as the number of partitions of the original image, until it is determined that processing is finished. With this, processing in image-region units ends.
By correcting the selection signals Sm output for the respective image regions Pm, a selection signal SS for selecting a gray-scale transformation curve for each pixel constituting the input signal IS is output for each pixel (step S134). For example, the selection signals output for the image region Pm and the image regions surrounding the image region Pm are blended with the internal division ratio of the pixel position, and the selection signal SS corresponding to each pixel included in the image region Pm is obtained. The details of the correction are omitted here (see the (operation) section above and Figure 63).
The brightness value of each pixel included in the input signal IS and the selection signal SS are received as input, and the brightness value of the output signal OS is output using, for example, the two-dimensional LUT shown in Figure 51 (step S135). Then, it is determined whether processing of all the pixels is finished (step S136), and steps S134 to S136 are repeated, as many times as the number of pixels, until it is determined that processing is finished. With this, processing in pixel units ends.
Each step of the visual processing method shown in Figure 64 may also be realized by a computer or the like as a visual processing program.
(effect)
With the present invention, substantially the same effects as those described in the (effect) columns of the fourth and fifth embodiments above can be obtained. The effects characteristic of the sixth embodiment are described below.
(1)
The gray-scale transformation curve Cm selected for each image region Pm is selected based on the average brightness value of the wide-area image region Em. Therefore, even if the size of the image region Pm is small, a sufficient number of brightness values can be sampled. As a result, an appropriate gray-scale transformation curve Cm can be selected even for a small image region Pm.
(2)
The selection signal correction portion 124 outputs, by correction based on the selection signals Sm output in image-region units, a selection signal SS for each pixel. The pixels of the original image constituting the input signal IS are gray-scale processed using the gray-scale transformation curve candidate G1~Gp specified by the selection signal SS of each pixel. Therefore, an output signal OS that has undergone more suitable gray scale processing can be obtained. For example, the generation of pseudo-contours can be suppressed. Moreover, in the output signal OS, unnatural, conspicuous joints at the borders of the image regions Pm can be further prevented.
(3)
The gray scale processing execution portion 125 has a 2-dimensional LUT created in advance. Therefore, the processing load required for gray scale processing, more specifically the processing load required for creating the gray-scale transformation curves Cm, can be reduced. As a result, gray scale processing can be sped up.
(4)
The gray scale processing execution portion 125 carries out gray scale processing using a 2-dimensional LUT. The contents of the 2-dimensional LUT are read from a storage device such as a hard disk or ROM provided in the visual processing unit 121 and used in the gray scale processing. By changing the contents of the 2-dimensional LUT that is read, various kinds of gray scale processing can be realized without changing the hardware configuration. That is, gray scale processing better suited to the characteristics of the original image can be realized.
(variation)
The present invention is not limited to the above execution mode, and various modifications are possible without departing from its scope. For example, substantially the same modifications as in (variation) of the above 5th execution mode are also applicable to the 6th execution mode. In particular, (10)~(12) of (variation) of the 5th execution mode can be applied in the same way by reading the gray scale processing signal CS as the output signal OS and the selection signal Sm as the selection signal SS.
The variations distinctive to the 6th execution mode are described below.
(1)
In the above execution mode, the 2-dimensional LUT 141 composed of a matrix of 64 rows and 64 columns was given as an example of a 2-dimensional LUT. The effects of the present invention, however, are not limited to a 2-dimensional LUT of this size. For example, a matrix in which a greater number of gray-scale transformation curve candidates are arranged in the row direction is also possible. Alternatively, pixel values of the output signal OS corresponding to more finely divided steps of the pixel value of the input signal IS may be arranged in the column direction of the matrix. Specifically, pixel values of the output signal OS may be arranged so as to correspond to each pixel value of an input signal IS expressed by, for example, 10 bits.
If the size of the 2-dimensional LUT is made larger, more suitable gray scale processing can be carried out; if it is made smaller, the memory for storing the 2-dimensional LUT can be reduced.
(2)
In the above execution mode, it was described that when the value [SS] of the selection signal SS is not equal to any subscript (1~p) of the gray-scale transformation curve candidates G1~Gp provided in the 2-dimensional LUT 141 (with reference to Figure 51), the gray-scale transformation curve candidate G1~Gp whose subscript is the integer closest to the value [SS] is used for the gray scale processing of the input signal IS. Alternatively, when the value [SS] of the selection signal SS is not equal to any subscript (1~p) of the gray-scale transformation curve candidates G1~Gp provided in the 2-dimensional LUT 141, the pixel value of the input signal IS gray-scale processed with the candidate Gk (1≤k≤p-1), whose subscript is the largest integer (k) not exceeding the value [SS], and the pixel value gray-scale processed with the candidate Gk+1, whose subscript is the smallest integer (k+1) exceeding [SS], may both be computed and then weighted-averaged (interior-divided) using the fractional part of the value [SS] of the selection signal SS, and the result output as the output signal OS.
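A hedged sketch of this interpolation between adjacent curve candidates; the clamping behaviour at the ends of the subscript range is an assumption.

```python
import numpy as np

def apply_with_interpolation(lut: np.ndarray, brightness: float, ss: float) -> float:
    """Blend the outputs of Gk and Gk+1 by the fractional part of the selection signal SS."""
    p, levels = lut.shape
    k = int(np.floor(ss))                       # largest integer not exceeding SS
    frac = ss - k                               # fractional part of SS
    k = min(max(k, 1), p)                       # clamp the subscript into 1..p
    k_next = min(k + 1, p)
    col = int(brightness * (levels - 1))
    low, high = float(lut[k - 1, col]), float(lut[k_next - 1, col])
    return (1.0 - frac) * low + frac * high     # weighted average (interior division)
```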
(3)
In the above execution mode, it was described that pixel values of the output signal OS corresponding to the values of the upper 6 bits of the pixel values of an input signal IS expressed by, for example, 10 bits are arranged in the column direction of the matrix. Alternatively, the output signal OS may be output by the gray scale processing execution portion 125 as a matrix component linearly interpolated with the value of the lower 4 bits of the pixel value of the input signal IS. That is, matrix components corresponding to the values of the upper 6 bits of the pixel values of the input signal IS expressed by, for example, 10 bits are arranged in the column direction of the matrix, and the matrix component corresponding to the value of the upper 6 bits of the pixel value of the input signal IS and the matrix component corresponding to that value plus [1] (for example, the component one row down in Figure 51) are linearly interpolated using the value of the lower 4 bits of the pixel value of the input signal IS, and the result is output as the gray scale processing signal OS.
In this way, even if the size of the 2-dimensional LUT 141 (with reference to Figure 51) is small, more suitable gray scale processing can be carried out.
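A minimal sketch of the column-direction interpolation just described: the LUT holds one column per value of the upper 6 bits of a 10-bit input pixel, and the lower 4 bits interpolate between that column and the next one.

```python
import numpy as np

def lut_lookup_10bit(lut: np.ndarray, pixel_10bit: int, row: int) -> float:
    """lut has shape (rows, 64): 64 columns for the 64 upper-6-bit values."""
    upper = pixel_10bit >> 4                    # upper 6 bits: column index 0..63
    lower = pixel_10bit & 0x0F                  # lower 4 bits: interpolation weight
    next_col = min(upper + 1, lut.shape[1] - 1)
    a, b = float(lut[row, upper]), float(lut[row, next_col])
    return a + (b - a) * (lower / 16.0)         # linear interpolation along the column direction
```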
(4)
In the above execution mode, it was described that the selection signal Sm corresponding to the image region Pm is output based on the average brightness value of the wide area image region Em. The output method of the selection signal Sm, however, is not limited to this. For example, the selection signal Sm corresponding to the image region Pm may be output based on the maximum brightness value or the minimum brightness value of the wide area image region Em. The value [Sm] of the selection signal Sm may itself be the average brightness value, the maximum brightness value, or the minimum brightness value of the wide area image region Em.
Also, for example, the selection signal Sm corresponding to the image region Pm may be output as follows. That is, the average brightness value of each image region Pm is obtained, and from each average brightness value a temporary selection signal Sm' is obtained for each image region Pm. Here, the temporary selection signal Sm' takes as its value a subscript of the gray-scale transformation curve candidates G1~Gp. Then, the values of the temporary selection signals Sm' of the image regions included in the wide area image region Em are averaged, and the result is taken as the selection signal Sm of the image region Pm.
(5)
In the above execution mode, it was described that the selection signal Sm corresponding to the image region Pm is output based on the average brightness value of the wide area image region Em. Alternatively, the selection signal Sm corresponding to the image region Pm may be output based not on the simple average of the wide area image region Em but on a weighted average. In detail, as described with reference to Figure 54 in the above 5th execution mode, the average brightness value of each image region constituting the wide area image region Em is obtained, and the weights of image regions Ps1, Ps2, ... whose average brightness values differ greatly from the average brightness value of the image region Pm are reduced when obtaining the average brightness value of the wide area image region Em.
In this way, even when the wide area image region Em includes a region of abnormal brightness (for example, when the wide area image region Em includes the border between two objects of different brightness values), the brightness value of that abnormal region has little influence on the output of the selection signal Sm, and a suitable selection signal Sm can be output.
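A hedged sketch of this weighted wide-area average: regions whose average brightness differs strongly from that of Pm contribute less, so an object crossing the wide area region Em does not dominate the selection. The Gaussian form of the down-weighting is an assumption.

```python
import numpy as np

def weighted_wide_area_average(region_means: np.ndarray, center_mean: float,
                               sigma: float = 0.15) -> float:
    """region_means: average brightness of each image region inside Em."""
    weights = np.exp(-((region_means - center_mean) ** 2) / (2.0 * sigma ** 2))
    return float(np.sum(weights * region_means) / np.sum(weights))
```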
(6)
The visual processing unit 121 may further be provided with a profile data creation portion that creates the profile data holding the values of the 2-dimensional LUT. Specifically, the profile data creation portion is constituted by the image segmentation portion 102 and the gray-scale transformation curve leading-out portion 110 of the visual processing unit 101 (with reference to Figure 44), and the group of created gray-scale transformation curves is stored in the 2-dimensional LUT as the profile data.
Each gray-scale transformation curve stored in the 2-dimensional LUT may also be associated with a spatially processed input signal IS. In that case, in the visual processing unit 121, the image segmentation portion 122, the selection signal leading-out portion 123 and the selection signal correction portion 124 may be replaced with a spatial manipulation portion that spatially processes the input signal IS.
(the 7th execution mode)
Using Figure 65~Figure 71, a visual processing unit 161 as the 7th execution mode of the present invention is described.
The visual processing unit 161 shown in Figure 65 is a device that carries out visual processing such as spatial manipulation and gray scale processing of an image signal. The visual processing unit 161, together with a device that carries out color processing of the image signal, constitutes an image processing apparatus in a machine that handles images, such as a computer, television set, digital camera, portable phone, PDA, printer or scanner.
The visual processing unit 161 is a device that carries out visual processing using an image signal and a blurred signal obtained by applying spatial manipulation (blur filter processing) to the image signal, and it is characterized by its spatial manipulation.
Conventionally, when a blurred signal is derived using the pixels neighboring a target pixel, if the neighboring pixels include a pixel whose density differs greatly from that of the target pixel, the blurred signal is affected by that pixel of different density. That is, when a pixel near an edge of an object in an image is spatially processed, a pixel that is not originally part of the edge is affected by the density of the edge. Such spatial manipulation therefore causes, for example, the generation of pseudo-contours.
It has therefore been sought to carry out spatial manipulation adaptively to the content of the image. For example, Japanese Laid-Open Patent Publication No. H10-75395 creates a plurality of blurred signals of different degrees of blur and outputs a suitable blurred signal by synthesizing or switching among them. The aim is thereby to vary the filter size of the spatial manipulation and suppress the influence of pixels of different density.
On the other hand, since the above publication creates a plurality of blurred signals and synthesizes or switches among them, the circuit scale or the processing load of the device becomes large.
Therefore, the visual processing unit 161 as the 7th execution mode of the present invention aims to output a suitable blurred signal while reducing the circuit scale or the processing load of the device.
(visual processing unit 161)
Figure 65 shows the basic configuration of the visual processing unit 161, which carries out visual processing of an image signal (input signal IS) and outputs a visually processed image (output signal OS). The visual processing unit 161 is provided with: a spatial manipulation portion 162 that carries out spatial manipulation of the brightness value of each pixel of the original image obtained as the input signal IS and outputs an unsharp signal US; and a visual handling part 163 that carries out visual processing of the original image using the input signal IS and the unsharp signal US of the same pixel and outputs the output signal OS.
(spatial manipulation portion 162)
Using Figure 66, the spatial manipulation of the spatial manipulation portion 162 is described. The spatial manipulation portion 162 obtains, from the input signal IS, the pixel values of a target pixel 165 that is the object of the spatial manipulation and of the pixels in its surrounding region (below called the neighboring pixels 166).
The neighboring pixels 166 are pixels located in the surrounding region of the target pixel 165, namely the pixels included in a region of 9 vertical pixels by 9 horizontal pixels centered on the target pixel 165. The size of the surrounding region is not limited to this; it may be smaller or larger. Furthermore, the neighboring pixels 166 are divided, according to their distance from the target pixel 165 and starting from the nearer side, into 1st neighboring pixels 167 and 2nd neighboring pixels 168. In Figure 66, the 1st neighboring pixels 167 are the pixels included in a region of 5 vertical pixels by 5 horizontal pixels centered on the target pixel 165. The 2nd neighboring pixels 168 are the pixels located around the periphery of the 1st neighboring pixels 167.
The spatial manipulation portion 162 carries out a filtering operation on the target pixel 165. In the filtering operation, the pixel values of the target pixel 165 and the neighboring pixels 166 are weighted-averaged with weights based on the difference between the pixel values of the target pixel 165 and the neighboring pixels 166 and on their distance. The weighted average is calculated by the formula F = (Σ[Wij]·[Aij]) / Σ[Wij]. Here, [Wij] is the weight coefficient of the pixel located at row i, column j among the target pixel 165 and the neighboring pixels 166, and [Aij] is the pixel value of the pixel located at row i, column j among the target pixel 165 and the neighboring pixels 166. The symbol Σ denotes summation over the target pixel 165 and each of the neighboring pixels 166.
The weight coefficient [Wij] is described using Figure 67. The weight coefficient [Wij] is a value determined based on the difference between the pixel values of the target pixel 165 and the neighboring pixel 166 and on their distance. More specifically, the larger the absolute value of the difference of the pixel values, the smaller the weight coefficient assigned; and the larger the distance, the smaller the weight coefficient assigned.
For example, for the target pixel 165 itself, the weight coefficient [Wij] is the value [1].
For a pixel among the 1st neighboring pixels 167 whose pixel value differs from the pixel value of the target pixel 165 by an absolute value smaller than a prescribed threshold, the weight coefficient [Wij] is the value [1]. For a pixel among the 1st neighboring pixels 167 whose absolute difference is larger than the prescribed threshold, the weight coefficient [Wij] is the value [1/2]. That is, even among the pixels included in the 1st neighboring pixels 167, the assigned weight coefficient can differ depending on the pixel value.
For a pixel among the 2nd neighboring pixels 168 whose pixel value differs from the pixel value of the target pixel 165 by an absolute value smaller than the prescribed threshold, the weight coefficient [Wij] is the value [1/2]. For a pixel among the 2nd neighboring pixels 168 whose absolute difference is larger than the prescribed threshold, the weight coefficient [Wij] is the value [1/4]. That is, even among the pixels included in the 2nd neighboring pixels 168, the assigned weight coefficient can differ depending on the pixel value. Also, the 2nd neighboring pixels 168, which are farther from the target pixel 165 than the 1st neighboring pixels 167, are assigned smaller weight coefficients.
At this, the threshold value of so-called regulation is the pixel value for the object pixel 165 of the value in the scope of value [0.0~1.0], the value of size such as value (20/256~60/256).
The weighted average calculated as above is output as the unsharp signal US.
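A hedged sketch of the spatial manipulation of Figures 66 and 67: a 9x9 weighted average in which the weight of each neighboring pixel halves when its value differs from that of the target pixel by more than the threshold, and halves again in the 2nd (outer) neighborhood. Border handling by clamping and the numerical threshold are assumptions.

```python
import numpy as np

def unsharp_signal(img: np.ndarray, y: int, x: int, thresh: float = 30.0 / 256.0) -> float:
    """img: brightness values in [0.0, 1.0]; returns the unsharp signal US for pixel (y, x)."""
    h, w = img.shape
    center = img[y, x]
    num = den = 0.0
    for dy in range(-4, 5):
        for dx in range(-4, 5):
            yy = min(max(y + dy, 0), h - 1)
            xx = min(max(x + dx, 0), w - 1)
            inner = max(abs(dy), abs(dx)) <= 2      # 1st neighboring pixels (5x5 region)
            w_dist = 1.0 if inner else 0.5          # smaller weight farther away
            similar = abs(img[yy, xx] - center) < thresh
            w_diff = 1.0 if similar else 0.5        # smaller weight for large differences
            wij = w_dist * w_diff                   # the target pixel itself gets weight 1
            num += wij * img[yy, xx]
            den += wij
    return num / den
```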
(visual handling part 163)
In the visual handling part 163, visual processing is carried out using the values of the input signal IS and the unsharp signal US of the same pixel. The visual processing carried out here is processing such as contrast enhancement or dynamic range compression of the input signal IS. In contrast enhancement, a signal obtained by enhancing the difference or the ratio between the input signal IS and the unsharp signal US with an enhancement function is added to the input signal IS, sharpening the image. In dynamic range compression, the unsharp signal US is subtracted from the input signal IS.
The processing in the visual handling part 163 may also be carried out by a 2-dimensional LUT that takes the input signal IS and the unsharp signal US as inputs and outputs the output signal OS.
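A minimal sketch of the two kinds of processing named above; the gain and compression factors are illustrative assumptions, not values taken from the text.

```python
import numpy as np

def enhance_contrast(IS: np.ndarray, US: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """Add the enhanced difference between IS and US back to IS (sharpening)."""
    return np.clip(IS + gain * (IS - US), 0.0, 1.0)

def compress_dynamic_range(IS: np.ndarray, US: np.ndarray, amount: float = 0.3) -> np.ndarray:
    """Subtract (part of) the unsharp signal from IS."""
    return np.clip(IS - amount * US, 0.0, 1.0)
```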
(visual processing method, program)
The above processing may also be executed as a visual processing program by a computer or the like. The visual processing program is a program that makes a computer execute the visual processing method described below.
The visual processing method is provided with: a spatial manipulation step of carrying out spatial manipulation of the brightness value of each pixel of the original image obtained as the input signal IS and outputting an unsharp signal US; and a visual processing step of carrying out visual processing of the original image using the input signal IS and the unsharp signal US of the same pixel and outputting an output signal OS.
In the spatial manipulation step, the weighted averaging described in the explanation of the spatial manipulation portion 162 is carried out for each pixel of the input signal IS, and the unsharp signal US is output. The details are as described above and are omitted here.
In the visual processing step, the visual processing described in the explanation of the visual handling part 163 is carried out using the input signal IS and the unsharp signal US of the same pixel, and the output signal OS is output. The details are as described above and are omitted here.
(effect)
Using Figure 68(a)~(b), the effects of the visual processing realized by the visual processing unit 161 are described. Figure 68(a) shows processing by a conventional filter. Figure 68(b) shows processing by the filter of the present invention.
Figure 68(a) shows a case where the neighboring pixels 166 include an object 171 of different density. For the spatial manipulation of the target pixel 165, a smoothing filter with prescribed filter coefficients is used. Therefore, the target pixel 165, which is not originally part of the object 171, is affected by the density of the object 171.
Figure 68(b) shows the spatial manipulation of the present invention. In the spatial manipulation of the present invention, different weight coefficients are used for each of: the portion 166a of the neighboring pixels 166 that includes the object 171, the 1st neighboring pixels 167 that do not include the object 171, the 2nd neighboring pixels 168 that do not include the object 171, and the target pixel 165. Therefore, the influence of pixels of greatly differing density on the spatially processed target pixel 165 can be suppressed, and more suitable spatial manipulation can be carried out.
Moreover, the visual processing unit 161 does not need to create a plurality of blurred signals as in Japanese Laid-Open Patent Publication No. H10-75395. Therefore, the circuit scale or the processing load of the device can be reduced.
Further, in the visual processing unit 161, the filter size of the spatial filter and, in effect, the shape of the image region referred to by the filter can be changed adaptively according to the image content. Therefore, spatial manipulation suited to the image content can be carried out.
(variation)
(1)
The sizes of the neighboring pixels 166, the 1st neighboring pixels 167, the 2nd neighboring pixels 168 and so on described above are examples, and other sizes are also possible.
The weight coefficients described above are examples, and other values are also possible. For example, when the absolute value of the difference of the pixel values exceeds the prescribed threshold, the weight coefficient may be assigned the value [0]. In this way, the spatially processed target pixel 165 is not affected by pixels of greatly differing density. In applications whose purpose is contrast enhancement, this has the effect that portions which originally have a certain degree of contrast are not excessively enhanced.
The weight coefficient may also be a value given by a function such as the following.
(1-a)
The value of the weight coefficient may be given by a function whose variable is the absolute value of the difference of the pixel values. The function is, for example, one that decreases monotonically with the absolute value of the difference of the pixel values: when the absolute value of the difference is relatively small, the weight coefficient becomes large (approaches 1), and when it is large, the weight coefficient becomes small (approaches 0).
(1-b)
The value of the weight coefficient may also be given by a function whose variable is the distance from the target pixel 165. The function is, for example, one that decreases monotonically with the distance from the target pixel 165: when the distance from the target pixel 165 is relatively small, the weight coefficient becomes large (approaches 1), and when the distance is large, the weight coefficient becomes small (approaches 0).
With (1-a) and (1-b) above, the weight coefficient is given more continuously. Therefore, compared with the case of using a threshold, a more suitable weight coefficient can be given; excessive contrast enhancement and the generation of pseudo-contours and the like can be suppressed, and processing of higher visual quality can be carried out.
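A hedged sketch of (1-a) and (1-b): the weight falls monotonically and continuously with the pixel-value difference and with the distance from the target pixel; the Gaussian form and its parameters are assumptions chosen only to satisfy the stated monotonicity.

```python
import numpy as np

def weight(value_diff: float, distance: float,
           sigma_v: float = 30.0 / 256.0, sigma_d: float = 3.0) -> float:
    w_diff = np.exp(-(value_diff ** 2) / (2.0 * sigma_v ** 2))   # (1-a): near 1 when the difference is small
    w_dist = np.exp(-(distance ** 2) / (2.0 * sigma_d ** 2))     # (1-b): near 1 when the distance is small
    return float(w_diff * w_dist)
```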
(2)
The processing of each pixel described above may also be carried out in units of blocks containing a plurality of pixels. Specifically, first, the average pixel value of the target block that is the object of the spatial manipulation and the average pixel values of the surrounding blocks around the target block are calculated. Then, these average pixel values are weighted-averaged using the same weight coefficients as above. In this way, the average pixel value of the target block is further spatially processed.
In such a case, the spatial manipulation portion 162 can be used as the selection signal leading-out portion 113 (with reference to Figure 49) or as the selection signal leading-out portion 123 (with reference to Figure 62). In that case, the operation is the same as described in (variation)(6) of the 5th execution mode or (variation)(5) of the 6th execution mode.
This is explained using Figure 69~Figure 71.
(formation)
Figure 69 is a block diagram showing the configuration of a visual processing unit 961 that carries out the processing explained with Figure 65~Figure 68 in units of blocks each containing a plurality of pixels.
The visual processing unit 961 is configured to include: an image segmentation portion 964 that divides the image input as the input signal IS into a plurality of image blocks; a spatial manipulation portion 962 that carries out spatial manipulation of each of the divided image blocks; and a visual handling part 963 that carries out visual processing using the input signal IS and the spatial manipulation signal US2 output from the spatial manipulation portion 962.
The image segmentation portion 964 divides the image input as the input signal IS into a plurality of image blocks. It then derives a processing signal US1 containing a characteristic parameter for each divided image block. The characteristic parameter is a parameter expressing a characteristic of the image in each divided image block, for example a mean value (simple mean, weighted mean, etc.) or a representative value (maximum value, minimum value, median value, etc.).
Spatial manipulation portion 962, the processing signals US1 that acquisition comprises the characteristic parameter of each image block carries out spatial manipulation.
Using Figure 70, the spatial manipulation of the spatial manipulation portion 962 is described. Figure 70 shows the input signal IS divided into image blocks each containing a plurality of pixels. Here, each image block is a region containing 9 pixels, 3 vertical pixels by 3 horizontal pixels. This division method is an example, and the division is not limited to it. To fully obtain the visual processing effect, it is preferable to generate the spatial manipulation signal US2 over a considerably large region.
The spatial manipulation portion 962 obtains, from the processing signal US1, the characteristic parameters of the target image block 965 that is the object of the spatial manipulation and of each peripheral image block included in the neighboring area 966 around the target image block 965.
The neighboring area 966 is a region located around the target image block 965, namely a region of 5 vertical blocks by 5 horizontal blocks centered on the target image block 965. The size of the neighboring area 966 is not limited to this; it may be smaller or larger. Furthermore, the neighboring area 966 is divided, according to the distance from the target image block 965 and starting from the nearer side, into a 1st neighboring area 967 and a 2nd neighboring area 968.
In Figure 70, the 1st neighboring area 967 is a region of 3 vertical blocks by 3 horizontal blocks centered on the target image block 965. The 2nd neighboring area 968 is the region located around the periphery of the 1st neighboring area 967.
The spatial manipulation portion 962 carries out a filtering operation on the characteristic parameter of the target image block 965.
In the filtering operation, the values of the characteristic parameters of the target image block 965 and of the peripheral image blocks of the neighboring area 966 are weighted-averaged. The weights of this weighted average are determined based on the distance between the target image block 965 and each peripheral image block and on the difference between the values of their characteristic parameters.
More specifically, the weighted average is calculated by the formula F = (Σ[Wij]·[Aij]) / Σ[Wij].
Here, [Wij] is the weight coefficient corresponding to the image block located at row i, column j among the target image block 965 and the neighboring area 966, and [Aij] is the value of the characteristic parameter of the image block located at row i, column j among the target image block 965 and the neighboring area 966. The symbol Σ denotes summation over the target image block 965 and each image block of the neighboring area 966.
The weight coefficient [Wij] is described using Figure 71.
The weight coefficient [Wij] is a value determined based on the distance between the target image block 965 and each peripheral image block of the neighboring area 966 and on the difference between the values of their characteristic parameters. More specifically, the larger the absolute value of the difference of the characteristic parameter values, the smaller the weight coefficient assigned; and the larger the distance, the smaller the weight coefficient assigned.
For example, for the target image block 965 itself, the weight coefficient [Wij] is the value [1].
For a peripheral image block in the 1st neighboring area 967 whose characteristic parameter value differs from the characteristic parameter value of the target image block 965 by an absolute value smaller than a prescribed threshold, the weight coefficient [Wij] is the value [1]. For a peripheral image block in the 1st neighboring area 967 whose absolute difference is larger than the prescribed threshold, the weight coefficient [Wij] is the value [1/2]. That is, even among the peripheral image blocks included in the 1st neighboring area 967, the assigned weight coefficient can differ depending on the characteristic parameter value.
For a peripheral image block in the 2nd neighboring area 968 whose characteristic parameter value differs from the characteristic parameter value of the target image block 965 by an absolute value smaller than the prescribed threshold, the weight coefficient [Wij] is the value [1/2]. For a peripheral image block in the 2nd neighboring area 968 whose absolute difference is larger than the prescribed threshold, the weight coefficient [Wij] is the value [1/4]. That is, even among the peripheral image blocks included in the 2nd neighboring area 968, the assigned weight coefficient can differ depending on the characteristic parameter value. Also, the 2nd neighboring area 968, which is farther from the target image block 965 than the 1st neighboring area 967, is assigned smaller weight coefficients.
At this, the threshold value of so-called regulation is the characteristic value for the object images piece 965 of the value in value [0.0~1.0] scope, the value of size such as value (20/256~60/256).
The weighted average calculated as above is output as the spatial manipulation signal US2.
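A hedged sketch of the block-unit filtering of Figures 69~71: each block's characteristic parameter (here, its mean, one of the parameters listed above) replaces the pixel value, and the same two-step weighting (1 and 1/2 inside the 1st neighboring area, 1/2 and 1/4 in the 2nd) is applied over a 5x5 arrangement of blocks. The 3x3 block size, threshold and border clamping are assumptions.

```python
import numpy as np

def block_means(img: np.ndarray, bs: int = 3) -> np.ndarray:
    """Characteristic parameter (simple mean) of each 3x3 image block."""
    h, w = img.shape
    return img[:h - h % bs, :w - w % bs].reshape(h // bs, bs, w // bs, bs).mean(axis=(1, 3))

def spatial_signal_for_block(means: np.ndarray, bi: int, bj: int,
                             thresh: float = 30.0 / 256.0) -> float:
    rows, cols = means.shape
    center = means[bi, bj]
    num = den = 0.0
    for di in range(-2, 3):
        for dj in range(-2, 3):
            ii = min(max(bi + di, 0), rows - 1)
            jj = min(max(bj + dj, 0), cols - 1)
            w_dist = 1.0 if max(abs(di), abs(dj)) <= 1 else 0.5   # 1st vs 2nd neighboring area
            w_diff = 1.0 if abs(means[ii, jj] - center) < thresh else 0.5
            num += w_dist * w_diff * means[ii, jj]
            den += w_dist * w_diff
    return num / den                            # spatial manipulation signal US2 for this block
```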
In the visual handling part 963, the same visual processing as in the visual handling part 163 (with reference to Figure 65) is carried out. The difference from the visual handling part 163 is that, in place of the unsharp signal US, the spatial manipulation signal US2 of the target image block containing the target pixel of the visual processing is used.
The processing in the visual handling part 963 may be carried out collectively for the whole target image block containing the target pixel, or the spatial manipulation signal US2 may be switched in the order in which the pixels are obtained from the input signal IS.
The above processing is carried out for all pixels included in the input signal IS.
(effect)
In the spatial manipulation portion 962, the processing is carried out in units of image blocks. Therefore, the amount of processing of the spatial manipulation portion 962 can be reduced, and faster visual processing can be realized. The hardware scale can also be made small.
(variation)
In the above, processing in units of square blocks was described. The shape of the blocks, however, may be arbitrary.
And above-mentioned weight coefficient, threshold value etc. all can suitablely change.
Some of the weight coefficients may also be the value [0]. This is equivalent to giving the neighboring area 966 an arbitrary shape.
Also, although it was described that the spatial manipulation portion 962 carries out spatial manipulation using the characteristic parameters of the target image block 965 and of the neighboring area 966, the spatial manipulation may be carried out using only the characteristic parameters of the neighboring area 966. That is, in the weights of the weighted average of the spatial manipulation, the weight of the target image block 965 may be set to the value [0].
(3)
The processing in the visual handling part 163 is not limited to the above. For example, the visual handling part 163 may compute, using the value A of the input signal IS, the value B of the unsharp signal US, a dynamic range compression function F4 and an enhancement function F5, the value C = F4(A) * F5(A/B), and output it as the value of the output signal OS. Here, the dynamic range compression function F4 is a monotonically increasing, upwardly convex function, for example F4(x) = x^γ (0 < γ < 1). The enhancement function F5 is a power function, for example F5(x) = x^α (0 < α ≤ 1).
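A minimal sketch of the formula C = F4(A) * F5(A/B) with the example functions given above; the concrete exponents and the small epsilon guarding the ratio are assumptions.

```python
import numpy as np

def visual_process(A: np.ndarray, B: np.ndarray,
                   gamma: float = 0.6, alpha: float = 0.5) -> np.ndarray:
    """A: input signal IS, B: unsharp signal US, both assumed to lie in (0, 1]."""
    eps = 1e-6
    F4 = np.power(A, gamma)                         # dynamic range compression (upwardly convex)
    F5 = np.power((A + eps) / (B + eps), alpha)     # enhancement of the local ratio A/B
    return np.clip(F4 * F5, 0.0, 1.0)               # output signal OS
```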
When such processing is carried out in the visual handling part 163, if the suitable unsharp signal US output by the spatial manipulation portion 162 of the present invention is used, the dynamic range of the input signal IS can be compressed while the local contrast is enhanced.
On the other hand, if the unsharp signal US is unsuitable, then when the blur is too small, although edge enhancement occurs, the contrast enhancement is of an inappropriate degree; and when the blur is too large, although contrast enhancement is carried out, the dynamic range is not compressed appropriately.
(the 8th execution mode)
As the 8th execution mode of the present invention, application examples of the visual processing units, visual processing methods and visual processing programs explained in the above 4th~7th execution modes, and systems using them, are described.
The visual processing unit is a device that is built into or connected to a machine that handles images, such as a computer, television set, digital camera, portable phone or PDA, carries out gray scale processing of images, and is realized as an integrated circuit such as an LSI.
In more detail, each functional module of above-mentioned execution mode can also can comprise part or all in the interior single chip of carrying out separately by single-chipization.In addition, at this, though,, be also referred to as IC, system LSI, ultra-large LSI, great scale LSI because of the difference of integrated level as LSI.
And the method for integrated circuit is not limited to LSI, also can realize by special circuit or common processor.After LSI makes, also can utilize programmable FPGA (Field PragrammableGate Array, field programmable gate array) or be reconfigurable into the connection of circuit unit of LSI inside or the device able to programme and the processor of setting.
Furthermore, if a circuit integration technology replacing LSI emerges from advances in or derivations of semiconductor technology, that technology may of course be used to integrate the functional blocks. Application of biotechnology and the like is also conceivable.
The processing of each block of Figure 44, Figure 49, Figure 62, Figure 65 and Figure 69 is carried out by, for example, a central processing unit (CPU) provided in the visual processing unit. A program for carrying out each processing is stored in a storage device such as a hard disk or ROM, and is executed in ROM, or read out to RAM and executed. The 2-dimensional LUTs referred to by the gray scale processing execution portions 114 and 125 of Figure 49 and Figure 62 are stored in a storage device such as a hard disk or ROM and referred to as needed. The 2-dimensional LUT may also be provided by a 2-dimensional LUT providing device connected to the visual processing unit directly or via a network. The same applies to the 1-dimensional LUT referred to by the gray scale processing execution portion 144 of Figure 56.
The visual processing unit may also be a device that is built into or connected to a machine that handles moving images and carries out gray scale processing of the image of each frame (each field).
In each visual processing unit, the visual processing method explained in the above 4th~7th execution modes is executed.
The visual processing program is a program that is stored in a storage device such as a hard disk or ROM of a machine that handles images, such as a computer, television set, digital camera, portable phone or PDA, built into or connected to it, and that carries out gray scale processing of images. The program is provided via a recording medium such as a CD-ROM, or via a network.
In the above execution modes, processing carried out on the brightness value of each pixel was described. The present invention, however, does not depend on the color space of the input signal IS. That is, the processing of the above execution modes is equally applicable to the luminance or brightness component of each color space when the input signal IS is expressed in the YCbCr color space, the YUV color space, the Lab color space, the Luv color space, the YIQ color space, the XYZ color space, the YPbPr color space, the RGB color space, and so on.
Also, when the input signal IS is expressed in the RGB color space, the processing of the above execution modes may be carried out independently for each of the RGB components.
(the 9th execution mode)
As the 9th execution mode of the present invention, application examples of the visual processing units, visual processing methods and visual processing programs explained above, and systems using them, are described using Figure 72~Figure 75.
Figure 72 is a block diagram showing the overall configuration of a content provider system ex100 that realizes a content delivery service. The area in which the communication service is provided is divided into cells of a desired size, and base stations ex107~ex110, which are fixed wireless stations, are installed in the respective cells.
In this content provider system ex100, for example, a computer ex111, a PDA (personal digital assistant) ex112, a camera ex113, a portable phone ex114, a camera-equipped portable phone ex115 and other such devices are connected to the Internet ex101 via an Internet service provider ex102, a telephone network ex104 and the base stations ex107~ex110.
The content provider system ex100, however, is not limited to the combination shown in Figure 72, and any combination of connections is possible. Each device may also be connected directly to the telephone network ex104 without going through the base stations ex107~ex110, which are fixed wireless stations.
The camera ex113 is a device capable of shooting moving images, such as a digital video camera. The portable phone may be any of a portable telephone of the PDC (Personal Digital Communications) system, the CDMA (Code Division Multiple Access) system, the W-CDMA (Wideband-Code Division Multiple Access) system or the GSM (Global System for Mobile Communications) system, a PHS (Personal Handyphone System), and so on.
The streaming server ex103 is connected to the camera ex113 via the base station ex109 and the telephone network ex104, enabling live delivery and the like based on data that a user encodes and transmits using the camera ex113. The encoding of the shot data may be carried out by the camera ex113, or by a server or the like that carries out data transmission processing. Moving image data shot by a camera ex116 may also be sent to the streaming server ex103 via the computer ex111. The camera ex116 is a device capable of shooting still images and moving images, such as a digital camera. In this case, the encoding of the moving image data may be carried out by either the camera ex116 or the computer ex111. The encoding is carried out in an LSI ex117 provided in the computer ex111 or the camera ex116. Software for image encoding and decoding may be embedded in some kind of storage medium (a CD-ROM, flexible disk, hard disk, etc.) readable by the computer ex111 or the like. Furthermore, moving image data may be sent by the camera-equipped portable phone ex115; the moving image data at this time is data encoded by the LSI provided in the portable phone ex115.
In this content provider system ex100, content shot by the user with the camera ex113, the camera ex116 or the like (for example, video of a live music performance) is encoded and sent to the streaming server ex103, while the streaming server ex103 streams the content data to clients that have made requests. The clients include the computer ex111, the PDA ex112, the camera ex113, the portable phone ex114 and so on, which are capable of decoding the encoded data. In this way, the content provider system ex100 allows the clients to receive and reproduce the encoded data, and by receiving, decoding and reproducing in real time at the client, personal broadcasting can also be realized.
When displaying the content, the visual processing units, visual processing methods and visual processing programs explained in the above execution modes may be used. For example, the computer ex111, the PDA ex112, the camera ex113, the portable phone ex114 and so on may be provided with the visual processing unit of the above execution modes and may realize the visual processing method and visual processing program.
The streaming server ex103 may also provide profile data to the visual processing unit via the Internet ex101. Furthermore, a plurality of streaming servers ex103 may exist, each providing different profile data, and the streaming server ex103 may itself create the profile data. When the visual processing unit can obtain profile data via the Internet ex101 in this way, the visual processing unit does not need to store in advance the profile data used for the visual processing, and the storage capacity of the visual processing unit can be reduced. Since profile data can be obtained from a plurality of servers connected via the Internet ex101, different kinds of visual processing can be realized. A portable phone is described below as an example.
Figure 73 is a diagram showing the portable phone ex115 provided with the visual processing unit of the above execution modes. The portable phone ex115 has: a main body including an antenna ex201 for transmitting and receiving radio waves to and from the base station ex110, a camera section ex203 such as a CCD camera capable of shooting video and still images, a display section ex202 such as a liquid crystal display for displaying the video shot by the camera section ex203 and the video received by the antenna ex201 after decoding, and a group of operation keys ex204; an audio output section ex208 such as a speaker for outputting audio; an audio input section ex205 such as a microphone for inputting audio; a recording medium ex207 for saving encoded or decoded data such as data of shot moving or still images, data of received mail, and moving image or still image data; and a slot section ex206 that allows the recording medium ex207 to be attached to and detached from the portable phone ex115. The recording medium ex207 is a medium, such as an SD card, that houses in a plastic case a flash memory element, a kind of EEPROM (Electrically Erasable and Programmable Read Only Memory), which is a non-volatile memory that can be electrically rewritten and erased.
The portable phone ex115 is further described using Figure 74. In the portable phone ex115, a main control section ex311, which performs overall control of each section of the main body including the display section ex202 and the operation keys ex204, a power supply circuit section ex310, an operation input control section ex304, an image encoding section ex312, a camera interface section ex303, an LCD (Liquid Crystal Display) control section ex302, an image decoding section ex309, a multiplexing/demultiplexing section ex308, a recording/reproducing section ex307, a modulation/demodulation circuit section ex306 and an audio processing section ex305 are connected to one another via a synchronous bus ex313.
When the end-call/power key is turned on by a user operation, the power supply circuit section ex310 supplies power to each section from a battery pack, thereby starting up the camera-equipped digital portable phone ex115 into an operable state.
Under the control of the main control section ex311, which consists of a CPU, ROM, RAM and the like, the portable phone ex115, in voice call mode, converts the audio signal collected by the audio input section ex205 into digital audio data in the audio processing section ex305, spread-spectrum processes it in the modulation/demodulation circuit section ex306, applies digital-to-analog conversion and frequency conversion in the transmitting/receiving circuit section ex301, and then transmits it via the antenna ex201. Also, in voice call mode, the portable phone ex115 amplifies the signal received by the antenna ex201, applies frequency conversion and analog-to-digital conversion, applies inverse spread-spectrum processing in the modulation/demodulation circuit section ex306, converts it into an analog audio signal in the audio processing section ex305, and outputs it via the audio output section ex208.
Furthermore, when an e-mail is sent in data communication mode, the text data of the e-mail input by operating the operation keys ex204 of the main body is sent to the main control section ex311 via the operation input control section ex304. The main control section ex311 spread-spectrum processes the text data in the modulation/demodulation circuit section ex306, applies digital-to-analog conversion and frequency conversion in the transmitting/receiving circuit section ex301, and then transmits it to the base station ex110 via the antenna ex201.
When image data is sent in data communication mode, the image data shot by the camera section ex203 is supplied to the image encoding section ex312 via the camera interface section ex303. When image data is not sent, the image data shot by the camera section ex203 may also be displayed directly on the display section ex202 via the camera interface section ex303 and the LCD control section ex302.
The image encoding section ex312 compresses and encodes the image data supplied from the camera section ex203, thereby converting it into encoded image data, and sends it to the multiplexing/demultiplexing section ex308. At the same time, the portable phone ex115 sends the audio collected by the audio input section ex205 while the camera section ex203 is shooting, as digital audio data, to the multiplexing/demultiplexing section ex308 via the audio processing section ex305.
The multiplexing/demultiplexing section ex308 multiplexes, in a prescribed manner, the encoded image data supplied from the image encoding section ex312 and the audio data supplied from the audio processing section ex305; the resulting multiplexed data is spread-spectrum processed in the modulation/demodulation circuit section ex306, subjected to digital-to-analog conversion and frequency conversion in the transmitting/receiving circuit section ex301, and then transmitted via the antenna ex201.
When data of a moving image file linked to a home page or the like is received in data communication mode, the signal received from the base station ex110 via the antenna ex201 is inverse spread-spectrum processed in the modulation/demodulation circuit section ex306, and the resulting multiplexed data is sent to the multiplexing/demultiplexing section ex308.
In order to decode the multiplexed data received via the antenna ex201, the multiplexing/demultiplexing section ex308 demultiplexes the multiplexed data into an encoded bit stream of image data and an encoded bit stream of audio data, supplies the encoded image data to the image decoding section ex309 via the synchronous bus ex313, and supplies the audio data to the audio processing section ex305.
Next, the image decoding section ex309 decodes the encoded bit stream of image data to generate reproduced moving image data and supplies it to the display section ex202 via the LCD control section ex302; in this way, for example, the moving image data included in a moving image file linked to a home page is displayed. At the same time, the audio processing section ex305 converts the audio data into an analog audio signal and supplies it to the audio output section ex208; in this way, for example, the audio data included in a moving image file linked to a home page is reproduced.
In the above configuration, the image decoding section ex309 may also be provided with the visual processing unit of the above execution modes.
The system is not limited to the above example. Digital broadcasting via satellite and terrestrial waves has recently become a topic, and as shown in Figure 75, the visual processing units, visual processing methods and visual processing programs explained in the above execution modes may also be incorporated into a digital broadcasting system. Specifically, at a broadcasting station ex409, an encoded bit stream of image information is transmitted by radio waves to a communication or broadcasting satellite ex410. The broadcasting satellite ex410 that receives it transmits broadcasting radio waves, these waves are received by a household antenna ex406 equipped for satellite broadcast reception, and a device such as a television set (receiver) ex401 or a set-top box (STB) ex407 decodes the encoded bit stream and reproduces it. Here, a device such as the television set (receiver) ex401 or the set-top box (STB) ex407 may be provided with the visual processing unit explained in the above execution modes, may use the visual processing method of the above execution modes, and may further be provided with the visual processing program. The visual processing unit, visual processing method or visual processing program explained in the above execution modes can also be implemented in a reproduction device ex403 that reads and decodes an encoded bit stream recorded on a storage medium ex402 such as a CD or DVD; in this case, the reproduced video signal is displayed on a monitor ex404. A configuration is also conceivable in which the visual processing unit, visual processing method or visual processing program explained in the above execution modes is implemented in the set-top box ex407 connected to a cable ex405 for cable television or to the antenna ex406 for satellite/terrestrial broadcasting, and reproduction is performed on the monitor ex408 of the television set. At this time, the visual processing unit explained in the above execution modes may be incorporated not only in the set-top box but also in the television set. A car ex412 having an antenna ex411 can also receive signals from the satellite ex410, the base station ex107 or the like and reproduce moving images on a display device such as a car navigation system ex413 mounted in the car ex412.
Furthermore, an image signal may be encoded and recorded on a recording medium. Concrete examples include a recorder ex420 such as a DVD recorder that records image signals on a DVD disk ex421 or a disk recorder that records them on a hard disk. They may also be recorded on an SD card ex422. If the recorder ex420 is provided with the decoding device of the above execution modes, the image signal recorded on the DVD disk ex421 or the SD card ex422 can be reproduced and displayed on the monitor ex408.
The configuration of the car navigation system ex413 may be, for example, the configuration shown in Figure 74 excluding the camera section ex203, the camera interface section ex303 and the image encoding section ex312; the same can be considered for the computer ex111, the television set (receiver) ex401 and the like.
For terminals such as the above portable phone ex114, three types of implementation are conceivable: a transmitting/receiving terminal having both an encoder and a decoder, a transmitting terminal having only an encoder, and a receiving terminal having only a decoder.
In this way, the visual processing units, visual processing methods and visual processing programs explained in the above execution modes can be used in any of the devices and systems described above, whereby the effects explained in the above execution modes can be obtained.
(the 10th execution mode)
Using Figure 76~Figure 94, a display unit 720 as the 10th execution mode of the present invention is described.
The display unit 720 shown in Figure 76 is a display unit that displays images, such as a PDP, LCD, CRT or projector. The display unit 720 is characterized in that it has an image processing apparatus 723 including the visual processing unit explained in the above execution modes, and in that the profile data used for the visual processing is switched automatically or manually. The display unit 720 may be an independent device, or may be a device provided in a portable information terminal such as a portable telephone, PDA or PC.
(display unit 720)
The display unit 720 is provided with: a display section 721, a drive control section 722, the image processing apparatus 723, a CPU 724, an input section 725, a tuner 726, an antenna 727, a decoder 728, a memory controller 729, a memory 730 and an external interface (I/F) 731, and is connected to an external device 740.
The display section 721 is a display device that displays the image information d360 read from the drive control section 722. The drive control section 722, under control from the CPU 724, writes the output image signal d361 output from the image processing apparatus 723 to the display section 721 and drives the display section 721. More specifically, the drive control section 722, under control from the CPU 724, applies a voltage value corresponding to the value of the output image signal d361 to the display section 721 to make it display the image.
The image processing apparatus 723 is a device that, under control from the CPU 724, carries out image processing of the input image data d372 (with reference to Figure 77) included in the input image signal d362, and outputs an output image signal d361 including the output image data d371 (with reference to Figure 77). The image processing apparatus 723 includes the visual processing unit explained in the above execution modes, and is characterized in that it carries out image processing using profile data. The details are described later.
The CPU 724 is a device that carries out computations related to the data processing of each section of the display unit 720 and controls each section. The input section 725 is a user interface for the user to operate the display unit 720, and is composed of keys, buttons, a remote control and the like for controlling each section.
The tuner 726 demodulates a signal received wirelessly or by wire and outputs it as digital data. Specifically, the tuner 726 receives terrestrial (digital/analog) broadcasts, BS (digital/analog) broadcasts, CS broadcasts and the like via the antenna 727 or a cable (not shown). The decoder 728 decodes the digital data demodulated by the tuner 726 and outputs it as the input image signal d362 input to the image processing apparatus 723.
The memory controller 729 carries out control, such as of the address and access timing, of the memory 730, which is constituted by DRAM or the like, in accordance with the operation of the CPU 724.
The external I/F 731 is an interface for obtaining image data, profile information and the like from external devices 740 such as a memory card 733 or a PC 735 and outputting them as the input image signal d362. Profile information is information relating to the profile data used for carrying out the image processing; its details are described later. The external I/F 731 is composed of, for example, a memory card I/F 732, a PC I/F 734, a network I/F 736, a wireless I/F 737 and the like. The external I/F 731 does not need to be provided with all of the devices illustrated here.
The memory card I/F 732 is an interface for connecting the memory card 733, which records image data, profile information or the like, to the display unit 720. The PC I/F 734 is an interface for connecting the PC 735, an external device such as a personal computer that records image data, profile information or the like, to the display unit 720. The network I/F 736 is an interface for connecting the display unit 720 to a network and obtaining image data, profile information or the like. The wireless I/F 737 is an interface for connecting the display unit 720 to external devices via a wireless LAN or the like and obtaining image data, profile information or the like. The external I/F 731 is not limited to those illustrated; it may also be, for example, an interface for connecting a USB device, an optical fiber or the like to the display unit 720.
Via view data or the description document information that exterior I/F731 obtained, after by coder 728 decodings, be transfused in the image processing apparatus 723 as required as received image signal d362.
(image processing apparatus 723)
(1) formation of image processing apparatus 723
Using Figure 77, the configuration of the image processing apparatus 723 is described. The image processing apparatus 723 is a device that carries out visual processing and look processing of the input image data d372 included in the received image signal d362 and outputs the output image signal d361 including the output image data d371. Here, the input image data d372 and the output image data d371 are image data having RGB components: the input image data d372 has (IR, IG, IB) components of the RGB color space, and the output image data d371 has (OtR, OtG, OtB) components of the RGB color space.
The image processing apparatus 723 includes: the colorful visual processing unit 745, which carries out colorful visual processing on the input image data d372; the look processing unit 746, which carries out look processing on the colorful visual processing signal d373 output from the colorful visual processing unit 745; and the description document information output part 747, which outputs the description document information SSI, SCI used to determine the description document data used in the colorful visual processing and the look processing. Here, the colorful visual processing signal d373 is image data having RGB components, with (OR, OG, OB) components of the RGB color space.
Below, the detailed configurations are described in the order of the description document information output part 747, the colorful visual processing unit 745, and the look processing unit 746.
(2) description document information output part 747 and description document information SSI, SCI
(2-1) summary of description document information output part 747
Use Figure 78, describe at description document information output part 747 with description document information SSI, SCI output.
The description document information output part 747 is a device that outputs the description document information SSI and SCI to the colorful visual processing unit 745 and the look processing unit 746, respectively (see Figure 77), and is composed of an environment measuring portion 749, an information input unit 748, and an output control part 750. The environment measuring portion 749 automatically detects at least part of the environmental information described later and outputs it as a detection signal Sd1. The information input unit 748 obtains the detection signal Sd1, lets the user input environmental information other than the environmental information included in the detection signal Sd1, and outputs it as an input signal Sd2. The output control part 750 obtains the detection signal Sd1 and the input information Sd2, and outputs the description document information SSI, SCI to the colorful visual processing unit 745 and the look processing unit 746.
Before describing each part in detail, the description document information SSI, SCI is described first.
(2-1) description document information SSI, SCI
Description document information SSI, SCI is information used to determine the description document data used in the colorful visual processing unit 745 and the look processing unit 746. Specifically, the description document information SSI, SCI includes at least one of: the description document data themselves; label information such as a number identifying the description document data; parameter information representing the features of the processing of the description document data; and environmental information relating to the display environment of the display part 721 (see Figure 76) or the viewing environment of the image displayed on the display part 721.
Description document data are data used for image processing in the colorful visual processing unit 745 or the look processing unit 746, and are, for example, coefficient matrix data that store transform coefficients to be applied to the image data to be processed, or table data (for example, a 2 dimension LUT) that give the image data after processing corresponding to the image data to be processed.
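As a rough illustration only (the function and variable names below are not from this specification), table data of this kind can be thought of as a pre-computed 2-dimensional array that is simply indexed by the two input values:

```python
# Minimal sketch: applying 2-dimensional-LUT-style description document data.
# The table gives the processed value for each pair of input-signal and
# unsharp-signal levels; names and the 8-bit range are illustrative.
def apply_2d_lut(lut, input_level, unsharp_level):
    """lut[input_level][unsharp_level] holds the pre-computed output value."""
    return lut[input_level][unsharp_level]

# Example: an identity-like table that real description document data would replace.
identity_lut = [[i for _ in range(256)] for i in range(256)]
assert apply_2d_lut(identity_lut, 128, 40) == 128
```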
Label information is identifying information used to distinguish one set of description document data from other description document data; for example, it is a number assigned to each of the plurality of description document data registered in the colorful visual processing unit 745 and the look processing unit 746.
Parameter information is information representing the features of the processing of the description document data; for example, it is information obtained by quantifying the degree of processing, such as contrast enhancement processing, dynamic range compression processing, or color conversion processing, realized by the description document data.
Environmental information is information relating to the environment in which the image data after image processing is displayed and viewed; for example, ambient light information such as the shading value or color temperature of the ambient light at the place where the display unit 720 is installed, product information of the display part 721 (for example, an identification code), the size of the image displayed on the display part 721, positional information relating to the distance between the displayed image and the user viewing the image, and user information such as the user's age and sex.
In the following, the case where the description document information SSI, SCI includes label information is described.
(2-2) environment measuring portion 749
The environment measuring portion 749 is a device that detects environmental information using sensors or the like. The environment measuring portion 749 is, for example, an optical sensor that detects the shading value or color temperature of the ambient light; a device that reads the product information of the installed display part 721 wirelessly or by wire (for example, a reader of wireless tags, a reader of bar codes, or a device that reads information from a database managing the information of each part of the display unit 720); a sensor, such as a wireless or infrared sensor, that measures the distance to the user; or a device, such as a camera, that obtains information related to the user.
(2-3) information input unit 748
The information input unit 748 is an input device through which the user inputs environmental information, and outputs the input environmental information as the input information Sd2. The information input unit 748 may be composed of, for example, switches and a circuit that senses input from the switches, or it may be composed of software that operates a user interface for input displayed on the display part 721 or on the information input unit 748 itself. The information input unit 748 may be built into the display unit 720, or it may be a device that inputs information via a network or the like.
Through the information input unit 748, environmental information other than the environmental information included in the detection signal Sd1 is input. For example, the information input unit 748 may restrict the environmental information that the user can input according to the environmental information included in the detection signal Sd1.
Alternatively, the information input unit 748 may allow all environmental information to be input regardless of the detection signal Sd1. In this case, the information input unit 748 may either not obtain the detection signal Sd1 at all, or obtain the detection signal Sd1 while having the user input more detailed information.
(2-4) output control part 750
The output control part 750 obtains the detection signal Sd1 and the input information Sd2, and outputs the description document information SSI, SCI. Specifically, the output control part 750 selects suitable description document data according to the environmental information obtained from the detection signal Sd1 and the input information Sd2, and outputs the label information of those data. More specifically, the output control part 750 refers to a database that associates each value of the environmental information with candidate description document data, and thereby selects description document data suitable for the obtained environmental information.
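A minimal sketch of this kind of selection is shown below; the environmental-information keys, thresholds, and label numbers are purely hypothetical and are only meant to illustrate a database-style lookup from environmental information to label information:

```python
# Illustrative sketch of the output control part 750: map environmental
# information gathered from Sd1/Sd2 to label information of description
# document data. Table contents and thresholds are hypothetical.
def select_profile_labels(env):
    """env: dict of environmental information, e.g. ambient brightness, viewing distance."""
    ssi_label, sci_label = 0, 0          # default description document data
    if env.get("ambient_brightness", 0) > 200:
        ssi_label = 3                    # data that strengthen local contrast
    if env.get("viewing_distance_m", 1.0) > 3.0:
        ssi_label = 5                    # data that adjust gray scale for a small viewing angle
        sci_label = 2                    # matching look processing
    return ssi_label, sci_label

ssi, sci = select_profile_labels({"ambient_brightness": 250, "viewing_distance_m": 1.5})
```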
The association between environmental information and description document data is described further.
For example, when the shading value of the ambient light around the display unit 720 is high, it is preferable to carry out visual processing that strengthens local contrast. Accordingly, the output control part 750 outputs the label information of description document data that more strongly enhance local contrast.
Also, for example, when the distance between the user and the display unit 720 is large, the viewing angle of the image displayed on the display part 721 becomes small, and the image appears small. When the viewing angle differs, the perceived shading value of the image also differs. Accordingly, the output control part 750 outputs the label information of description document data that change the gray scale and contrast based on the size of the viewing angle. The size of the display part 721 of the display unit 720 is also an element that determines the size of this viewing angle.
Next, an example of the operation of the output control part 750 is described.
In human vision, as the displayed image size becomes larger the image tends to be perceived as brighter, and it is preferable to suppress the lifting of dark areas. Taking this into account, for example, when it is judged from the obtained environmental information that the size of the image displayed on the display part 721 has become large, label information of description document data that suppress the lifting of dark areas over the whole image while increasing the improvement of local contrast is output to the colorful visual processing unit 745 as the description document information SSI. Further, for the look processing unit 746, label information of description document data that carry out look processing corresponding to the description document information SSI and the other environmental information is output as the description document information SCI. Here, "look processing corresponding to the description document information SSI and the other environmental information" means, for example, look processing that achieves appropriate color reproduction, under the influence of the environment, for the image that has been visually processed using the description document data specified by the description document information SSI.
In addition, when the output control part 750 obtains the same environmental information from both the detection signal Sd1 and the input information Sd2, it may give priority to either the detection signal Sd1 or the input information Sd2.
(3) the colorful visual processing unit 745
(3-1) formation of colorful visual processing unit 745
Using Figure 79, the configuration of the colorful visual processing unit 745 is described. The colorful visual processing unit 745 is characterized in that it includes the visual processing unit 753, which can carry out the visual processing described in the above embodiments and performs visual processing on the brightness component of the input image data d372, and the color control portion 752, which extends the visual processing performed on the brightness component to the color components.
The colorful visual processing unit 745 includes: the 1st color space transformation portion 751, the visual processing unit 753, the color control portion 752, and the 2nd color space transformation portion 754.
The 1st color space transformation portion 751 transforms the input image data d372 of the RGB color space into a brightness component and color components. For example, the 1st color space transformation portion 751 transforms the input image data d372 of the RGB color space into signals of the YCbCr color space. The transformed brightness component signal becomes the input signal IS, and the color component signals become the chrominance signals ICb, ICr.
The visual processing unit 753 is a device that carries out visual processing of the input signal IS, the brightness component of the input image data d372, and outputs the output signal OS. The description document information SSI is input to the visual processing unit 753 from the description document information output part 747 (see Figure 77), and the visual processing unit 753 carries out visual processing using the description document data specified by the input description document information SSI. Details of the visual processing unit 753 will be described later.
The chrominance signals ICb, ICr, the input signal IS, and the output signal OS are input to the color control portion 752, which outputs the corrected chrominance signals OCb, OCr. For example, the color control portion 752 performs the correction using the ratio between the input signal IS and the output signal OS. More specifically, the ratio of the signal value of the output signal OS to the corresponding signal value of the input signal IS is multiplied by the signal values of the chrominance signals ICb, ICr, and the resulting values are output as the values of the corrected chrominance signals OCb, OCr.
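The correction described above can be sketched as follows; the variable names mirror the signal names in the text, and the guard against a zero input value is an added assumption:

```python
# Minimal sketch of the ratio-based chrominance correction of the color control portion 752.
def correct_chroma(IS, OS, ICb, ICr):
    ratio = OS / IS if IS != 0 else 1.0  # ratio of processed to original brightness (assumed guard)
    return ICb * ratio, ICr * ratio      # corrected chrominance signals OCb, OCr

OCb, OCr = correct_chroma(IS=100, OS=130, ICb=-12, ICr=20)  # -> (-15.6, 26.0)
```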
The 2nd color space transformation portion 754 transforms the output signal OS and the corrected chrominance signals OCb, OCr, which are signals of the YCbCr color space, into the colorful visual processing signal d373 of the RGB color space.
(3-2) formation of visual processing unit 753
As the visual processing unit 753, a visual processing unit similar to the visual processing unit 1 (see Fig. 1) described in the above embodiments is used.
Use Figure 80, describe at the formation of visual processing unit 753.
The visual processing unit 753 shown in Figure 80 is a visual processing unit having the same configuration as the visual processing unit shown in Figure 11. Parts that realize substantially the same functions as those of the visual processing unit 1 are given the same reference symbols. The visual processing unit 753 shown in Figure 80 differs from the visual processing unit 1 shown in Fig. 1 in that the description document data entry device 8 registers, in the 2 dimension LUT4, the description document data determined by the obtained description document information SSI. The explanation of the other parts is the same as in the above embodiments and is therefore omitted.
The visual processing unit 753 shown in Figure 80 carries out visual processing of the input signal IS using the description document data registered in the 2 dimension LUT4, and outputs the output signal OS.
(4) the look processing unit 746
The look processing unit 746 carries out look processing of the colorful visual processing signal d373, the output of the colorful visual processing unit 745, using the description document data determined by the obtained description document information SCI. The description document data used by the look processing unit 746 are, for example, a 3-dimensional lookup table that gives the components (OtR, OtG, OtB) of the output image data d371 for the components (OR, OG, OB) of the colorful visual processing signal d373, or transform coefficient matrix data of 3 rows and 3 columns.
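A minimal sketch of the 3-row, 3-column matrix form is shown below; the coefficient values are hypothetical examples, not values taken from this specification:

```python
# Sketch of look processing with 3x3 transform coefficient matrix data.
def apply_color_matrix(matrix, rgb):
    r, g, b = rgb
    return tuple(m[0] * r + m[1] * g + m[2] * b for m in matrix)

saturation_boost = [      # hypothetical coefficients
    [1.2, -0.1, -0.1],
    [-0.1, 1.2, -0.1],
    [-0.1, -0.1, 1.2],
]
OR, OG, OB = 100, 80, 60
OtR, OtG, OtB = apply_color_matrix(saturation_boost, (OR, OG, OB))
```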
(effect of display unit 720)
(1)
In the display unit 720, image processing can be carried out using description document data suitable for the obtained environmental information. In particular, since the description document data can be selected based not only on automatically detected environmental information but also on environmental information input by the user, image processing with a higher visual effect can be performed.
When a lookup table is used as the description document data, image processing can be carried out simply by referencing the table, so high-speed image processing can be realized.
In the display unit 720, different image processing can be realized by changing the description document data. That is, different image processing can be realized without changing the hardware configuration.
In image processing using description document data, the description document data can be generated in advance, so even complex image processing can be realized easily.
(2)
The description document information output part 747 can output different description document information to each of the colorful visual processing unit 745 and the look processing unit 746. Therefore, it is possible to prevent the image processing in the colorful visual processing unit 745 and the look processing unit 746 from duplicating each other or from canceling out each other's effects. That is, the image processing apparatus 723 can carry out image processing appropriately.
(variation)
(1)
In the above embodiment, the input image data d372, the output image data d371, and the colorful visual processing signal d373 were described as signals of the RGB color space, but they may be signals of other color spaces. For example, each signal may be expressed in the YCbCr color space, the YUV color space, the Lab color space, the Luv color space, the YIQ color space, the XYZ color space, the YPbPr color space, the RGB color space, or the like.
The same applies to the signals processed in the 1st color space transformation portion 751 and the 2nd color space transformation portion 754; they are not limited to those described in the embodiment.
(2)
In the above embodiment, the case where the description document information SSI, SCI includes label information was described. Here, the operation of each part of the image processing apparatus 723 when the description document information SSI, SCI includes other information (description document data, parameter information, environmental information, etc.) is described.
(2-1)
When the description document information SSI, SCI includes description document data, the output control part 750 is a device that stores registered description document data or that can generate description document data; based on the obtained detection signal Sd1 and input information Sd2, it determines the description document data to be used in the colorful visual processing unit 745 and the look processing unit 746, and outputs them respectively.
In the colorful visual processing unit 745 and the look processing unit 746, image processing is carried out using the obtained description document data. For example, the description document data entry device 8 of the visual processing unit 753 (see Figure 80) registers the description document data included in the description document information SSI in the 2 dimension LUT4, and visual processing is carried out. In this case, the visual processing unit 753 may be configured without the description document data entry device 8.
In this image processing apparatus 723, since the description document data themselves are output from the description document information output part 747 to the colorful visual processing unit 745 and the look processing unit 746, the description document data to be used can be specified reliably. Furthermore, the memory capacity for storing description document data in the colorful visual processing unit 745 and the look processing unit 746 can be reduced.
(2-2)
When the description document information SSI, SCI includes parameter information, the output control part 750 is a device having a database or the like for outputting parameter information based on the detection signal Sd1 and the input information Sd2. This database stores the relationship between values of the environmental information and the image processing suitable for the environment represented by those values.
In the colorful visual processing unit 745 and the look processing unit 746, description document data that realize image processing close to the value of the obtained parameter information are selected, and image processing is carried out. For example, the description document data entry device 8 of the visual processing unit 753 (see Figure 80) selects description document data using the parameter information included in the description document information SSI, registers the selected description document data in the 2 dimension LUT4, and visual processing is carried out.
In this image processing apparatus 723, can cut down the data volume of description document information SSI, SCI.
(2-3)
When the description document information SSI, SCI includes environmental information, the output control part 750 is a device that outputs the detection signal Sd1 and the input information Sd2 as the description document information. Here, the output control part 750 may output all of the environmental information obtained from the detection signal Sd1 and the input information Sd2 as the description document information SSI, SCI, or may selectively divide it into the description document information SSI and the description document information SCI and output them.
In the colorful visual processing unit 745 and the look processing unit 746, suitable description document data are selected according to the environmental information, and image processing is carried out. For example, the description document data entry device 8 of the visual processing unit 753 (see Figure 80) refers to a database or the like that associates each value of the environmental information included in the description document information SSI with candidate description document data, selects description document data suitable for the obtained environmental information, registers the selected description document data in the 2 dimension LUT4, and visual processing is carried out.
When all of the environmental information is output as the description document information SSI, SCI, the processing in the output control part 750 can be reduced. When the environmental information is selectively output as the description document information SSI, SCI, the processing carried out in each of the colorful visual processing unit 745 and the look processing unit 746 can be taken into account, so image processing whose effects are duplicated or cancel each other out can be prevented. Furthermore, since the colorful visual processing unit 745 and the look processing unit 746 each obtain only the environmental information relevant to their selection, the description document data can be selected more reliably and simply.
(2-4)
The description document information SSI, SCI only needs to include at least one of description document data, label information, parameter information, and environmental information, and may also include several of them at the same time.
Also, the description document information SSI and the description document information SCI do not necessarily need to be different information; they may be the same information.
(3)
The visual processing unit 753 may be a device that includes the description document data entry device 701 (see Fig. 9) described in (Variation) (7) of the 1st embodiment, and that generates new description document data from the description document data selected using the description document information SSI, according to a degree of synthesis obtained from the description document information SSI.
Using Figure 81, the operation of the visual processing unit 753 of this variation is described.
In the visual processing unit 753 of this variation, new description document data are generated using description document data selected, based on the description document information SSI, from the description document data registered in the description document data entry portion 702.
The description document data entry portion 702 selects the description document data 761 and the description document data 762 based on the label information or the like included in the description document information SSI. Here, the description document data 761 are description document data for carrying out dark area improvement processing, selected for example when the ambient light is weak, and the description document data 762 are description document data for carrying out local contrast improvement processing, selected for example when the ambient light is strong.
The description document generating unit 704 obtains the ambient light intensity included in the environmental information of the description document information SSI and, from the description document data 761 and the description document data 762, generates description document data for carrying out image processing suitable for that ambient light intensity. More specifically, the values of the description document data 761 and the description document data 762 are interpolated using the value of the ambient light intensity included in the environmental information.
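A minimal sketch of such interpolation, assuming the ambient light intensity is expressed on a 0-255 scale and the description document data are flattened into lists of values, is shown below:

```python
# Sketch of the interpolation performed by the description document generating unit 704.
def blend_profiles(dark_profile, bright_profile, ambient_light, max_light=255):
    w = min(max(ambient_light / max_light, 0.0), 1.0)   # 0 = weak light, 1 = strong light
    return [(1.0 - w) * d + w * b
            for d, b in zip(dark_profile, bright_profile)]

# data 761 (dark-area improvement) and data 762 (local-contrast improvement), flattened
new_profile = blend_profiles([0.9, 0.8, 0.7], [0.2, 0.5, 0.9], ambient_light=64)
```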
As described above, the visual processing unit 753 of this variation can generate new description document data and carry out visual processing. In the visual processing unit 753 of this variation, even if many description document data are not registered in advance, description document data can be generated to realize many different kinds of visual processing.
(4)
The visual processing unit 753 is not limited to the configuration shown in Figure 80. For example, it may be any of the visual processing unit 520 (see Fig. 6), the visual processing unit 525 (see Fig. 7), and the visual processing unit 530 (see Fig. 8) described in the above embodiments.
Using Figure 82 to Figure 84, each configuration is described.
(4-1)
Use Figure 82, describe at the formation of visual processing unit 753a.
The visual processing unit 753a shown in Figure 82 is a visual processing unit having the same configuration as the visual processing unit 520 shown in Fig. 6. Parts that realize the same functions as those of the visual processing unit 520 are given the same reference symbols. The visual processing unit 753a shown in Figure 82 differs from the visual processing unit 520 shown in Fig. 6 in that the description document data entry portion 521 registers, in the 2 dimension LUT4, the description document data determined based on the obtained description document information SSI and the determination result SA from the spectral discrimination portion 522. The explanation of the other parts is the same as in the above embodiments and is therefore omitted.
In this visual processing unit 753a, the description document data can be selected based not only on the description document information SSI but also on the determination result SA, so more suitable visual processing can be carried out.
(4-2)
Use Figure 83, describe at the formation of visual processing unit 753b.
The visual processing unit 753b shown in Figure 83 is a visual processing unit having the same configuration as the visual processing unit 525 shown in Figure 7. Parts that realize the same functions as those of the visual processing unit 525 are given the same reference symbols. The visual processing unit 753b shown in Figure 83 differs from the visual processing unit 525 shown in Figure 7 in that the description document data entry portion 526 registers, in the 2 dimension LUT4, the description document data determined based on the obtained description document information SSI and the input result SB from the input unit 527. The explanation of the other parts is the same as in the above embodiments and is therefore omitted.
In this visual processing unit 753b, the description document data can be selected based not only on the description document information SSI but also on the input result SB, so more suitable visual processing can be carried out.
(4-3)
Use Figure 84, describe at the formation of visual processing unit 753c.
The visual processing unit 753c shown in Figure 84 is a visual processing unit having the same configuration as the visual processing unit 530 shown in Figure 8. Parts that realize the same functions as those of the visual processing unit 530 are given the same reference symbols. The visual processing unit 753c shown in Figure 84 differs from the visual processing unit 530 shown in Figure 8 in that the description document data entry portion 531 registers, in the 2 dimension LUT4, the description document data determined based on the obtained description document information SSI, the determination result SA from the spectral discrimination portion 522, and the input result SB from the input unit 527. The explanation of the other parts is the same as in the above embodiments and is therefore omitted.
In this visual processing unit 753c, the description document data can be selected based not only on the description document information SSI but also on the determination result SA and the input result SB, so more suitable visual processing can be carried out.
(5)
Among the parts of the display unit 720 described in the above embodiments, parts that realize the same function may be shared between devices.
For example, the input part 725 of the display unit 720 (see Figure 76) may also serve as the information input unit 748 of the description document information output part 747, the input unit 527 of the visual processing unit 753b (see Figure 83), the input unit 527 of the visual processing unit 753c (see Figure 84), and so on.
Also, the description document data entry device 8 of the visual processing unit 753 (see Figure 80), the description document data entry portion 521 of the visual processing unit 753a (see Figure 82), the description document data entry portion 526 of the visual processing unit 753b (see Figure 83), the description document data entry portion 531 of the visual processing unit 753c (see Figure 84), and the like may be devices provided outside the image processing apparatus 723 (see Figure 76), and may be realized, for example, by the memory 730 or the external device 740.
The description document data registered in each description document data entry portion or description document data entry device may be registered in advance, or may be obtained from the external device 740 or the tuner 726.
Each description document data entry portion or description document data entry device may also serve as the storage device that stores the description document data in the look processing unit 746.
The description document information output part 747 may also be a device connected, by wire or wirelessly, outside the image processing apparatus 723 or outside the display unit 720.
(6)
The description document data entry device 8 of the visual processing unit 753 (see Figure 80), the description document data entry portion 521 of the visual processing unit 753a (see Figure 82), the description document data entry portion 526 of the visual processing unit 753b (see Figure 83), the description document data entry portion 531 of the visual processing unit 753c (see Figure 84), and the like may also be devices capable of outputting description document information of the description document data used in the visual processing.
For example, the description document data entry device 8 of the visual processing unit 753 (see Figure 80) outputs description document information of the description document data registered in the 2 dimension LUT4. The output description document information is, for example, input to the look processing unit 746 and used there to select the description document data to be used in the look processing unit 746.
In this way, even when the visual processing unit 753 uses description document data other than the description document data specified by the description document information SSI, the look processing unit 746 can determine which description document data are used in the visual processing unit 753. Therefore, it is possible to further prevent the image processing in the colorful visual processing unit 745 and the look processing unit 746 from duplicating each other or from canceling each other out.
(7)
The image processing apparatus 723 may include, in place of the description document information output part 747, a user input part through which the user makes inputs.
Figure 85 shows an image processing apparatus 770 as a variation of the image processing apparatus 723 (see Figure 77). The image processing apparatus 770 is characterized in that it includes a user input part 772 through which the user makes inputs. In the image processing apparatus 770, parts that realize substantially the same functions as in the image processing apparatus 723 are given the same reference symbols, and their explanation is omitted.
The user input part 772 outputs the description document information SSI and SCI to the colorful visual processing unit 745 and the look processing unit 746, respectively.
Use Figure 86, be illustrated at user's input part 772.
The user input part 772 is composed of a part through which the user makes inputs and a part that outputs the description document information SSI, SCI based on the input information.
The part through which the user makes inputs is composed of a shading value input part 775, into which the shading value preferred by the user is input, and an image quality input part 776, into which the image quality preferred by the user is input.
The shading value input part 775 is composed of, for example, switches for inputting the state of light in the displayed image and switches for inputting the state of light in the environment where the image is displayed, and outputs the input result as a 1st input result Sd14. The switches for inputting the state of light in the displayed image are switches for inputting, for example, whether the image is backlit or frontlit, whether a flash was used at the time of shooting, and the state of the macro program used at the time of shooting. Here, a macro program is a program for controlling the imaging device according to the state of the subject. The switches for inputting the state of light in the environment where the image is displayed are, for example, switches for inputting the shading value, color temperature, and the like of the environment.
The image quality input part 776 is composed of switches for inputting the image quality preferred by the user, for example switches for inputting different visual effects such as default, dynamic, and classic. The image quality input part 776 outputs the input result as a 2nd input result Sd13.
The part that outputs the description document information SSI, SCI based on the input information is composed of an output control part 777. The output control part 777 obtains the 1st input result Sd14 and the 2nd input result Sd13, and outputs description document information according to their values. More specifically, it outputs the description document information SSI, SCI of the description document data associated with the combination of the values of the 1st input result Sd14 and the 2nd input result Sd13.
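One possible sketch of such an association, with hypothetical keys and label values, is shown below:

```python
# Illustrative sketch of the association used by the output control part 777.
PROFILE_TABLE = {
    ("backlight", "dynamic"): {"SSI": 11, "SCI": 4},   # dark-area improvement / restrained look
    ("frontlight", "default"): {"SSI": 0, "SCI": 0},
}

def output_profile_info(sd14_light_state, sd13_quality):
    entry = PROFILE_TABLE.get((sd14_light_state, sd13_quality), {"SSI": 0, "SCI": 0})
    return entry["SSI"], entry["SCI"]

ssi, sci = output_profile_info("backlight", "dynamic")
```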
The operation of the output control part 777 is described more concretely. For example, when "dynamic" is input through the image quality input part 776 and "backlight mode" is input through the shading value input part 775, the output control part 777 outputs, as the description document information SSI, description document information of description document data that improve the dark areas caused by backlighting. On the other hand, as the description document information SCI, it outputs description document information of description document data whose look processing does not further correct the backlit portions, so that the image processing is optimized for the image processing apparatus 770 as a whole.
The effects produced by the image processing apparatus 770 are described below.
In the image processing apparatus 770, image processing can be realized using suitable description document data corresponding to the user's wishes. Furthermore, since different description document information SSI, SCI can be output to the colorful visual processing unit 745 and the look processing unit 746, the respective image processing can be prevented from duplicating or from canceling each other out. In addition, since different description document information SSI, SCI are output to the colorful visual processing unit 745 and the look processing unit 746 respectively, the amount of description document information SSI, SCI that each device needs to consider can be reduced, and the description document data can be selected more simply.
(8)
The image processing apparatus 723 may also be a device that separates the attribute information included in the received image signal d362, selects description document data based on the separated attribute information, and carries out image processing.
(8-1) formation of image processing apparatus 800
Figure 87 shows an image processing apparatus 800 as a variation of the image processing apparatus 723. The image processing apparatus 800 is characterized in that it includes a separated part 801 that separates attribute information d380 from the received image signal d362, and it outputs the description document information SSI, SCI based on the separated attribute information d380.
The image processing apparatus 800 shown in Figure 87 includes: the separated part 801, which separates the input image data d372 and the attribute information d380 from the received image signal d362; a property determine portion 802, which outputs the description document information SSI, SCI based on the attribute information d380; the colorful visual processing unit 745, which carries out visual processing based on the input image data d372 and the description document information SSI; and the look processing unit 746, which carries out look processing based on the colorful visual processing signal d373 and the description document information SCI. Parts having substantially the same functions as in the above embodiments are given the same reference symbols, and their explanation is omitted.
The separated part 801 separates the input image data d372 and the attribute information d380 from the received image signal d362. The attribute information d380 is information arranged in the header or the like of the received image signal d362, and is information relating to the attributes of the received image signal d362. The separated part 801 separates the attribute information d380 by reading only a prescribed number of bits of the received image signal d362 from its beginning. The attribute information d380 may also be arranged at the end of the received image signal d362, or it may be arranged, accompanied by flag information, in a state in which it can be separated from within the received image signal d362.
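Under the assumption that the attribute information occupies a fixed number of bytes at the head of the signal (the length and field layout below are illustrative only), the separation can be sketched as follows:

```python
# Sketch of the separated part 801: split a fixed-length header from the image data.
ATTR_HEADER_BYTES = 64   # assumed, prescribed length of the attribute information

def separate(received_signal: bytes):
    attribute_info_d380 = received_signal[:ATTR_HEADER_BYTES]
    input_image_data_d372 = received_signal[ATTR_HEADER_BYTES:]
    return attribute_info_d380, input_image_data_d372

d380, d372 = separate(b"\x01" * 64 + b"pixel-data...")
```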
Figure 88 shows an example of the format of a received image signal d362 that includes attribute information d380. In the received image signal d362 shown in Figure 88, content information serving as the attribute information d380 is arranged at the beginning of the data, and the input image data d372 is arranged after it.
Content information is attribute information relating to the content of the input image data d372 as a whole, and includes the title of the input image data d372, the production company, the director, the production date, the genre, attributes specified by the production side, and the like. Here, the genre is information relating to the kind of the content, for example SF, action, drama, or horror. The attributes specified by the production side are information relating to the display characteristics specified by the content production side, for example information such as action-like or horror-like.
Property determine portion 802 based on the attribute information d380 that is separated, exports description document information SSI, SCI.
Using Figure 89, the configuration of the property determine portion 802 is described. The property determine portion 802 includes an attribute test section 806, an attribute input part 805, and an output control part 807.
The attribute test section 806 detects the content information included in the attribute information d380 and outputs it as detection information Sd3.
The attribute input part 805 is a device through which the user inputs content information. The attribute input part 805 obtains the detection information Sd3, updates the information included in the detection information Sd3 or adds information not included in the detection information Sd3, and outputs the result as input information Sd4.
Here, the attribute input part 805 is an input device through which the user inputs content information, and outputs the input content information as the input information Sd4. The attribute input part 805 may be composed of, for example, switches and a circuit that senses input from the switches, or it may be composed of software that operates an input interface displayed on the display part 721 or on the attribute input part 805 itself. It may be built into the display unit 720, or it may be a device that inputs information via a network or the like.
In addition, the attribute input part 805 may restrict the content information that the user can input according to the content information included in the detection information Sd3. For example, when the attribute test section 806 detects that the genre of the input image data d372 is "animation", the attribute input part 805 may allow only items related to animation (for example, animation director, animation title, etc.) to be input.
The output control part 807 obtains the detection information Sd3 and the input information Sd4, and outputs the description document information SSI, SCI.
The detailed operation of the output control part 807 is described. The output control part 807 obtains the content of the attribute information d380 from the detection information Sd3 and the input information Sd4. It then determines the description document data for carrying out image processing suitable for an image having that attribute information d380. For example, the output control part 807 determines the description document data by referring to a database that stores associations between the items of the attribute information d380 and description document data. Here, when the output control part 807 obtains different values for the same item of content information from the detection information Sd3 and the input information Sd4, it may give priority to either one. For example, the input information Sd4 may always be used preferentially.
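A minimal sketch of this merging and lookup, with a hypothetical database and giving priority to the user input, is shown below:

```python
# Sketch of the output control part 807: merge Sd3 and Sd4, then look up
# description document data; database contents and label values are hypothetical.
PROFILE_DB = {("genre", "action"): 7, ("genre", "horror"): 9}

def decide_profile(detected_sd3, user_sd4):
    merged = dict(detected_sd3)
    merged.update(user_sd4)                       # user input always takes priority
    for item, value in merged.items():
        label = PROFILE_DB.get((item, value))
        if label is not None:
            return label
    return 0                                      # default description document data

label = decide_profile({"genre": "action"}, {"genre": "horror"})   # -> 9
```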
The output control part 807 then outputs the description document information SSI, SCI, which includes at least one of the determined description document data, label information such as a number identifying the determined description document data, and parameter information representing the features of the processing of the determined description document data.
Detailed description of the description document information SSI, SCI is the same as in the above embodiments and is therefore omitted.
In the colorful visual processing unit 745 and the look processing unit 746, the description document data to be used in the image processing are determined from the description document information SSI, SCI, and image processing is carried out. For example, when the description document information SSI, SCI includes description document data, those description document data are used to carry out image processing. When the description document information SSI, SCI includes label information or parameter information, the description document data determined by that information are used to carry out image processing.
The output control part 807 may also output, as the description document information SSI, SCI, the items of the content information obtained from the detection information Sd3 and the input information Sd4. In this case, the colorful visual processing unit 745 or the look processing unit 746 determines the description document data to be used in the image processing from the description document information SSI, SCI, and carries out image processing.
(8-2) effect
(1)
According to the content information attached at the time of content production, image processing using suitable description document data can be carried out. Therefore, image processing can be carried out in consideration of the wishes of the content production side.
More specifically, from the title, the production company, and so on, the tendency of the shading value, color temperature, and the like of the whole image can be judged, and image processing that converts the shading value, color temperature, and the like of the whole image can be carried out. Also, according to the attributes specified by the production side and the like, the image can be displayed as the production side intends.
(2)
The property determine portion 802 includes not only the attribute test section 806, which automatically detects content information, but also the attribute input part 805, through which content information is input manually. Therefore, even when there is a problem in the detection of content information, content information can be input appropriately through the attribute input part 805 and suitable image processing can be carried out. Furthermore, through the attribute input part 805, the preferences of the user side can be reflected in the image processing. For example, the user's preferences can be reflected by giving an animation an image that emphasizes a cheerful atmosphere, or by making an image more vivid. Further, the content information of an image to be corrected can be revised, for example in the manner of a digitally remastered edition.
(3)
Description document information SSI, SCI can be indicated separately to the colorful visual processing unit 745 and the look processing unit 746. Therefore, even when a plurality of values, such as "action and horror", are specified as the genre in the description document information, the colorful visual processing unit 745 can appropriately carry out visual processing for action-like parts with much motion, and the look processing unit 746 can appropriately carry out look processing for horror-like parts intended to have a psychological impact.
In addition, since different description document information SSI, SCI are output to each of the colorful visual processing unit 745 and the look processing unit 746, the amount of description document information SSI, SCI that each device needs to consider can be reduced, and the description document data can be selected more simply.
(8-3) variation
(1)
Content information that has been obtained once may also be stored and reused. In this case, even if all of the information is not obtained again, the stored content information can be used to determine the description document data and carry out image processing.
(2)
The image processing apparatus 800 may also be a device that does not include either the attribute input part 805 or the attribute test section 806. Also, the separated part 801 does not necessarily need to be inside the image processing apparatus 800.
(3)
The description document information SSI and the description document information SCI do not necessarily need to be different information; they may be the same information.
(4)
The attribute information d380 may be information other than content information. Specifically, it may include scene attribute information as attributes relating to parts of the input image data, shooting attribute information relating to the environment in which the received image signal d362 was generated, broadcast attribute information relating to the medium through which the display unit 720 obtained the received image signal d362, record attribute information relating to the medium or device on which the received image signal d362 is recorded, description document attribute information relating to the description document data used in the image processing, and the like. These are described concretely below.
In the following description, the cases where the attribute information d380 includes scene attribute information, shooting attribute information, broadcast attribute information, record attribute information, and description document attribute information are each described separately; however, the attribute information d380 may also include all of these pieces of information, including the content information, or any combination of some of them, at the same time. In that case, the effects produced by these pieces of information are further enhanced.
(4-1) Scene attribute information
(4-1-1)
Figure 90 shows the format of a received image signal d362 that includes scene attribute information as the attribute information d380. In the received image signal d362 shown in Figure 90, scene attribute information is arranged for each scene of the input image data d372. The scene attribute information is arranged, for example, accompanied by flag information or the like, in a state in which it can be separated from the input image data d372.
Scene attribute information is information describing the content of the scene of the input image data d372 that follows it. For example, scene attribute information is described by a combination of items such as "shading value", "object", "motion", and "scene outline", such as "dark, forest, landscape" or "bright, person, landscape". These are merely examples of scene attribute information and it is not limited to them; for example, the "scene outline" may specify content such as news, sports, home drama, or action.
An image processing apparatus that carries out image processing on a received image signal d362 including scene attribute information is the same as the image processing apparatus 800 adapted to handle scene attribute information.
The separated part 801 (see Figure 87) separates the attribute information d380 based on the format shown in Figure 90.
The attribute test section 806 (see Figure 89) detects the scene attribute information included in the attribute information d380 and outputs the detection information Sd3. The attribute input part 805 allows the user to input scene attribute information.
The output control part 807 (see Figure 89) obtains the detection information Sd3 and the input information Sd4, and outputs the description document information SSI, SCI. For example, the output control part 807 determines the description document data to be used in the colorful visual processing unit 745 and the look processing unit 746 by referring to a database or the like that stores associations between the items of the scene attribute information obtained from the detection information Sd3 and the input information Sd4 and description document data.
Detailed description of the description document information SSI, SCI is the same as in the above embodiments and is therefore omitted. The description document information SSI, SCI may also include scene attribute information. In this case, the colorful visual processing unit 745 and the look processing unit 746 select the description document data used in the image processing from the obtained scene attribute information, and carry out image processing.
The operation of each part of the image processing apparatus 800 is the same as in the case where the attribute information d380 includes content information, and its explanation is therefore omitted.
(4-1-2)
This variation provides the same effects as those described in the above embodiments. The effects characteristic of this variation are described below.
According to the scene attribute information, image processing using suitable description document data can be carried out. Therefore, image processing can be carried out in consideration of the intention of the content production side.
Scene attribute information is arranged for each scene of the input image data d372 as required. Therefore, the image processing can be switched in finer detail, and more suitable image processing can be carried out.
For example, when the scene attribute information "dark, forest, landscape" is obtained from the detection information Sd3 and the input information Sd4, the output control part 807 outputs description document information SSI specifying "description document data that improve the dark, shadowed areas", and at the same time outputs description document information SCI specifying "description document data that carry out memory color correction for green but do not carry out memory color correction for the skin color".
Also, for example, when the scene attribute information "bright, person, close-up" is obtained from the detection information Sd3 and the input information Sd4, the output control part 807 outputs description document information SSI specifying "description document data that enhance the dark areas of the person while suppressing the lifting of the dark areas of the background", and at the same time outputs description document information SCI specifying "description document data that carry out neither white balance adjustment nor memory color correction of the skin color".
Also, for example, when the scene attribute information "person, drama" is obtained from the detection information Sd3 and the input information Sd4, the main processing target in the image is the person. Accordingly, for the colorful visual processing unit 745, the output control part 807 outputs description document information SSI specifying description document data that improve the contrast of skin-colored, low-brightness areas while not improving the contrast of other low-brightness areas. For the look processing unit 746, on the other hand, it outputs description document information SCI specifying description document data that carry out memory color correction of the skin color while weakening memory color correction for other colors such as green.
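The three examples above can be summarized, purely for illustration, as a table that maps a scene description to separate description document data for visual processing and look processing; the rule names below are hypothetical:

```python
# Hypothetical illustration: scene attributes select separate description
# document data for visual processing (SSI) and look processing (SCI).
SCENE_RULES = {
    ("dark", "forest", "landscape"): {"SSI": "lift-shadow-dark-areas",
                                      "SCI": "memory-color-green-only"},
    ("bright", "person", "close-up"): {"SSI": "enhance-person-suppress-background",
                                       "SCI": "no-white-balance-no-skin-correction"},
    ("person", "drama"):              {"SSI": "contrast-up-skin-low-brightness",
                                       "SCI": "skin-memory-color-weak-others"},
}

def profiles_for_scene(scene_attributes):
    return SCENE_RULES.get(tuple(scene_attributes),
                           {"SSI": "default", "SCI": "default"})

info = profiles_for_scene(["person", "drama"])
```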
The description document data can be selected based not only on the scene attribute information automatically detected by the attribute test section 806 but also on scene attribute information input by the user. Therefore, the image quality can be improved further for the user.
Also, for a series of scenes in which, for example, a person moves while the sunlight in the background changes slowly, scene attribute information may be attached to every scene, or scene attribute information may be attached only to the first scene. Alternatively, scene attribute information may be attached only to the first scene, and the following scenes in the series may carry, as scene attribute information, only transition information on the shading value relative to the first scene or transition information on the objects. In this way, flicker and abrupt changes in image quality in the image processing of moving images can be suppressed.
(4-2) Shooting attribute information
(4-2-1)
Figure 91 represents to comprise and takes the form of attribute information at interior received image signal d362 as attribute information d380.In the received image signal d361 shown in Figure 91, partly dispose the shooting attribute information at the head of received image signal d362.In addition, take attribute information, be not limited to this, for example also can follow flag information can with state that input image data d372 separates under dispose.
Taking attribute information, is with the information of the shooting state of description document input image data d372 thereafter.For example, take attribute information, by the combination institute description document of such project such as " position and direction ", " date ", " constantly ", " shooting machine information "." position and direction " is the information that is obtained by GPS etc. when taking." shooting machine information " is the information of the machine when taking, the in store information that has or not photoflash lamp, aperture, shutter speed, microshot (close-perspective recording shooting) to have or not etc.For example, also can be the information that the macroprogram that is used for using when taking (be used to make have or not photoflash lamp, aperture, program that the control combination of the speed of opening the door etc. is carried out) is determined.
An image processing apparatus that performs image processing on the input image signal d362 containing the shooting attribute information is the same as the image processing apparatus 800, except that it is made to operate in correspondence with the shooting attribute information.
The separation unit 801 (see Fig. 87) separates the attribute information d380 based on the format shown in Fig. 91.
The attribute detection unit 806 (see Fig. 89) detects the shooting attribute information contained in the attribute information d380 and outputs the detection information Sd3. The attribute input unit 805 lets the user enter shooting attribute information.
The output control unit 807 (see Fig. 89) acquires the detection information Sd3 and the input information Sd4 and outputs the profile information SSI and SCI. For example, the output control unit 807 determines the profile data to be used in the color visual processing unit 745 and the color processing unit 746 by referring to a stored database or the like that associates the items of the shooting attribute information obtained from the detection information Sd3 and the input information Sd4 with profile data. The details of the profile information SSI and SCI are the same as in the embodiments described above, and their explanation is therefore omitted.
The profile information SSI and SCI may also include the shooting attribute information itself. In that case, the color visual processing unit 745 and the color processing unit 746 select the profile data to be used for image processing from the acquired shooting attribute information and then perform the image processing.
The operation of the other components of the image processing apparatus 800 is the same as in the case where the attribute information d380 is content information, and its explanation is therefore omitted.
(4-2-2)
The present invention provides the same effects as those described in the embodiments above. The effects characteristic of this variation are described below.
Based on the shooting attribute information, image processing using suitable profile data can be performed. Image processing that takes the intention of the content creation side into account is therefore possible.
For example, from items such as "position and direction", "date", "time", and "camera information", information such as the "direction of the sun", "season", "weather", "color of the sunlight", and "presence or absence of a flash" in the environment in which the input image data d372 was generated can be obtained, and the shooting conditions of the subject (for example, whether it was front-lit or back-lit) can be analyzed. Image processing can then be carried out using profile data suited to the analyzed shooting conditions.
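A rough sketch of this idea follows; it is not the patented algorithm. The derivation of the sun's azimuth from "position and direction", "date", and "time" is omitted, and the 45-degree threshold and the profile names are placeholders.

```python
def classify_lighting(camera_azimuth_deg, sun_azimuth_deg, flash_used):
    """Classify the subject's lighting; sun_azimuth_deg would come from the
    shooting attributes (position, direction, date, time)."""
    if flash_used:
        return "flash"
    diff = abs((sun_azimuth_deg - camera_azimuth_deg + 180) % 360 - 180)
    return "backlit" if diff < 45 else "frontlit"  # sun roughly in front of the camera -> back-lit subject

# One possible association between the analyzed condition and profile data.
PROFILE_BY_LIGHTING = {
    "backlit":  "raise_dark_subject_contrast",
    "frontlit": "standard_profile",
    "flash":    "suppress_highlight_clipping",
}
```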
Profile data can be selected not only from the shooting attribute information detected automatically by the attribute detection unit 806 but also from shooting attribute information entered by the user. Image quality suited to the user's preference can therefore be achieved more fully.
(4-3) Broadcast attribute information
(4-3-1)
Fig. 92 shows the format of an input image signal d362 that contains broadcast attribute information as the attribute information d380. In the input image signal d362 shown in Fig. 92, the broadcast attribute information is placed in the header portion of the input image signal d362. The arrangement of the broadcast attribute information is not limited to this; for example, it may be accompanied by flag information or the like and placed in a state in which it can be separated from the input image data d372.
The broadcast attribute information is information on the medium through which the input image signal d362 was obtained before reaching the display unit 720, and in particular is information on the type of broadcasting by which the input image signal d362 was obtained. For example, the broadcast attribute information holds a value indicating one of "terrestrial digital broadcasting", "terrestrial analog broadcasting", "satellite digital broadcasting", "satellite analog broadcasting", and "Internet broadcasting".
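A small illustrative sketch of this value and one possible association with profile data follows; the encoding and the profile names are assumptions, echoing the later example that noise in analog transmissions should not be enhanced.

```python
from enum import Enum

class BroadcastAttribute(Enum):
    TERRESTRIAL_DIGITAL = 0
    TERRESTRIAL_ANALOG = 1
    SATELLITE_DIGITAL = 2
    SATELLITE_ANALOG = 3
    INTERNET = 4

# Hypothetical mapping: avoid profiles that would amplify transmission noise
# for analog broadcast forms.
PROFILE_BY_BROADCAST = {
    BroadcastAttribute.TERRESTRIAL_ANALOG: "mild_enhancement",
    BroadcastAttribute.SATELLITE_ANALOG:   "mild_enhancement",
    BroadcastAttribute.TERRESTRIAL_DIGITAL: "standard_profile",
}
```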
An image processing apparatus that performs image processing on the input image signal d362 containing the broadcast attribute information is the same as the image processing apparatus 800, except that it is made to operate in correspondence with the broadcast attribute information.
The separation unit 801 (see Fig. 87) separates the attribute information d380 based on the format shown in Fig. 92.
The attribute detection unit 806 (see Fig. 89) detects the broadcast attribute information contained in the attribute information d380 and outputs the detection information Sd3. The attribute input unit 805 lets the user enter broadcast attribute information.
The output control unit 807 (see Fig. 89) acquires the detection information Sd3 and the input information Sd4 and outputs the profile information SSI and SCI. For example, the output control unit 807 determines the profile data to be used in the color visual processing unit 745 and the color processing unit 746 by referring to a stored database or the like that associates the broadcast attribute information obtained from the detection information Sd3 and the input information Sd4 with profile data. The details of the profile information SSI and SCI are the same as in the embodiments described above, and their explanation is therefore omitted.
The profile information SSI and SCI may also include the broadcast attribute information itself. In that case, the color visual processing unit 745 and the color processing unit 746 select the profile data to be used for image processing from the acquired broadcast attribute information and then perform the image processing.
The operation of the other components of the image processing apparatus 800 is the same as in the case where the attribute information d380 is content information, and its explanation is therefore omitted.
(4-3-2)
The present invention provides the same effects as those described in the embodiments above. The effects characteristic of this variation are described below.
Based on the broadcast attribute information, image processing using suitable profile data can be performed. For example, the influence that the broadcast path has on the image can be corrected, and image processing that takes the intention of the broadcasting station into account can be carried out.
More specifically, for example, for an image obtained through terrestrial analog broadcasting, satellite analog broadcasting, or the like, profile data that would excessively enhance the noise introduced during transmission are not selected. Similarly, for an image containing a subject in a night scene, image processing can be performed using profile data that brighten the subject while maintaining the brightness of the night-scene areas.
Profile data can be selected not only from the broadcast attribute information detected automatically by the attribute detection unit 806 but also from broadcast attribute information entered by the user. Image quality suited to the user's preference can therefore be achieved more fully.
(4-4) Recording attribute information
(4-4-1)
Fig. 93 shows the format of an input image signal d362 that contains recording attribute information as the attribute information d380. In the input image signal d362 shown in Fig. 93, the recording attribute information is placed in the header portion of the input image signal d362. The arrangement of the recording attribute information is not limited to this; for example, it may be accompanied by flag information and placed in a state in which it can be separated from the input image data d372.
The recording attribute information is information on the medium and the device on which the input image signal d362 was recorded. For example, the recording attribute information includes the "period" of the recording of the input image signal d362, the "supplying manufacturer" of the recording medium and device, "product information" identifying the recording medium and device, and so on.
An image processing apparatus that performs image processing on the input image signal d362 containing the recording attribute information is the same as the image processing apparatus 800, except that it is made to operate in correspondence with the recording attribute information.
The separation unit 801 (see Fig. 87) separates the attribute information d380 based on the format shown in Fig. 93.
The attribute detection unit 806 (see Fig. 89) detects the recording attribute information contained in the attribute information d380 and outputs the detection information Sd3. The attribute input unit 805 lets the user enter recording attribute information.
The output control unit 807 (see Fig. 89) acquires the detection information Sd3 and the input information Sd4 and outputs the profile information SSI and SCI. For example, the output control unit 807 determines the profile data to be used in the color visual processing unit 745 and the color processing unit 746 by referring to a stored database or the like that associates the recording attribute information obtained from the detection information Sd3 and the input information Sd4 with profile data. The details of the profile information SSI and SCI are the same as in the embodiments described above, and their explanation is therefore omitted.
The profile information SSI and SCI may also include the recording attribute information itself. In that case, the color visual processing unit 745 and the color processing unit 746 select the profile data to be used for image processing from the acquired recording attribute information and then perform the image processing.
The operation of the other components of the image processing apparatus 800 is the same as in the case where the attribute information d380 is content information, and its explanation is therefore omitted.
(4-4-2)
The present invention provides the same effects as those described in the embodiments above. The effects characteristic of this variation are described below.
Based on the recording attribute information, image processing using suitable profile data can be performed. For example, when the "supplying manufacturer" is a camera manufacturer known for its own distinctive color processing, the profile information SCI is output such that the color processing unit 746 performs less color processing. Also, for input image data d372 recorded on film or the like, the profile information SCI is output such that color processing is performed in consideration of, for example, the characteristics of the color gamut that the film can reproduce. In this way, the influence that the recording medium and the recording device have on the image can be corrected, and image processing that takes the intention of the production side into account can be carried out.
Profile data can be selected not only from the recording attribute information detected automatically by the attribute detection unit 806 but also from recording attribute information entered by the user. Image quality suited to the user's preference can therefore be achieved more fully.
(4-5) Profile attribute information
(4-5-1)
Fig. 94 shows the format of an input image signal d362 that contains profile attribute information as the attribute information d380. In the input image signal d362 shown in Fig. 94, the profile attribute information is placed in the header portion of the input image signal d362. The arrangement of the profile attribute information is not limited to this; for example, it may be accompanied by flag information or the like and placed in a state in which it can be separated from the input image data d372.
The profile attribute information is information for specifying profile data; for example, it is information specifying the profile data recommended by the filming apparatus or the like that generated the input image data d372. The profile attribute information includes at least one of the profile data themselves, tag information such as a number identifying the profile data, and parameter information indicating the characteristics of the processing performed by the profile data. The profile data, tag information, and parameter information are the same as those described in the explanation of the profile information SSI and SCI in the embodiments above.
The profile data specified by the profile attribute information are profile data for performing any of the following image processing (a) to (c). Image processing (a) is image processing that the filming apparatus or the like that generated the input image data d372 has judged to be suitable for the input image data d372. Image processing (b) is, in addition to image processing (a), image processing for correcting the difference in characteristics between the display portion of the filming apparatus and a standard-model display device. Image processing (c) is, in addition to image processing (a), image processing for correcting the difference in characteristics between the display portion of the filming apparatus and the display unit 720 (see Fig. 76).
The profile attribute information further includes processing flag information indicating whether the input image data d372 contained in the input image signal d362 are data that have already been subjected to image processing in the filming apparatus or the like.
An image processing apparatus that performs image processing on the input image signal d362 containing the profile attribute information is the same as the image processing apparatus 800, except that it is made to operate in correspondence with the profile attribute information.
The separation unit 801 (see Fig. 87) separates the attribute information d380 based on the format shown in Fig. 94.
The attribute detection unit 806 (see Fig. 89) detects the profile attribute information contained in the attribute information d380 and outputs the detection information Sd3. The attribute input unit 805 lets the user enter profile attribute information.
The output control unit 807 (see Fig. 89) acquires the detection information Sd3 and the input information Sd4 and outputs the profile information SSI and SCI. Regardless of the form of the profile attribute information (profile data, tag information, or parameter information), the profile information SSI and SCI is output as information in any of the forms of profile data, tag information, or parameter information.
The operation of the output control unit 807 is described in detail below.
The output control unit 807 judges whether the information specifying profile data in the profile attribute information obtained from the detection information Sd3 or the input information Sd4 is to be output directly as the profile information SSI and SCI.
For example, when profile data are specified by the input information Sd4, the judgment is "output" regardless of the profile attribute information.
For example, when the profile attribute information includes information specifying profile data for performing image processing (a) or image processing (c) and the processing flag information indicates "not processed", the judgment is "output".
In all other cases, the judgment is "do not output".
For example, when the profile attribute information includes information specifying profile data for performing image processing (a) and the processing flag information indicates "processed", the output control unit 807 outputs, as the profile information SSI and SCI, information specifying profile data with which the color visual processing unit 745 and the color processing unit 746 do not perform image processing.
For example, when the profile attribute information includes information specifying profile data for performing image processing (b) and the processing flag information indicates "not processed", the output control unit 807 outputs, as the profile information SSI and SCI, information specifying profile data for performing, in addition to image processing (a), image processing for correcting the difference in characteristics between the standard-model display device and the display unit 720.
For example, when the profile attribute information includes information specifying profile data for performing image processing (b) and the processing flag information indicates "processed", the output control unit 807 outputs, as the profile information SSI and SCI, information specifying profile data for performing image processing for correcting the difference in characteristics between the standard-model display device and the display unit 720.
For example, when the profile attribute information includes information specifying profile data for performing image processing (c) and the processing flag information indicates "processed", the output control unit 807 outputs to the color visual processing unit 745 and the color processing unit 746, as the profile information SSI and SCI, information specifying profile data for performing image processing for correcting the difference in characteristics between the display portion of the filming apparatus and the display unit 720.
These processes are merely examples, and the invention is not limited to them.
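The judgments above can be condensed into the following sketch, offered only as an illustration of the decision flow and not as the claimed logic; the field names and profile labels are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ProfileAttr:
    target: str          # which of image processing (a), (b), (c) the attached profile performs
    profile_spec: str    # the information specifying the profile data

def decide_output(profile_attr: ProfileAttr, user_specified: bool, processed: bool):
    if user_specified:                      # profile data specified via input information Sd4
        return profile_attr.profile_spec    # always passed through
    if profile_attr.target in ("a", "c") and not processed:
        return profile_attr.profile_spec    # unprocessed data: use the recommendation as is
    if profile_attr.target == "a" and processed:
        return "no_further_processing"      # avoid processing the data twice
    if profile_attr.target == "b":
        # correct standard-model display vs. display unit 720,
        # adding (a) only when the data have not been processed yet
        return "a_plus_standard_to_720" if not processed else "standard_to_720"
    if profile_attr.target == "c" and processed:
        return "capture_display_to_720"     # correct capture-side display vs. display unit 720
    return None                             # remaining cases: do not output
```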
The operation of the other components of the image processing apparatus 800 is the same as in the case where the attribute information d380 is content information, and its explanation is therefore omitted.
(4-5-2)
The present invention provides the same effects as those described in the embodiments above. Its characteristic effects are described below.
Based on the profile attribute information, image processing using suitable profile data can be performed. For example, image processing can use the profile data recommended by the shooting side. Furthermore, a display can be produced that is close to the image confirmed on the display portion on the shooting side. Image processing that takes the intention of the production side into account is therefore possible.
Profile data can be selected not only from the profile attribute information detected automatically by the attribute detection unit 806 but also from profile attribute information entered by the user. Image quality suited to the user's preference can therefore be achieved more fully.
(The 11th execution mode)
A filming apparatus 820 as the 11th execution mode of the present invention is described using Figs. 95 to 103.
The filming apparatus 820 shown in Fig. 95 is an apparatus that shoots images, such as a still camera or a video camera. The filming apparatus 820 is characterized in that it includes an image processing apparatus 832 containing the visual processing unit described in the embodiments above, and in that the profile data used in visual processing are switched automatically or manually. The filming apparatus 820 may be a stand-alone device, or may be a device provided in a portable information terminal such as a portable telephone, a PDA, or a PC.
(Filming apparatus 820)
The filming apparatus 820 includes an imaging unit 821, the image processing apparatus 832, a display unit 834, a CPU 846, an illumination unit 848, an input unit 850, a security determination unit 852, an encoder 840, a memory controller 842, a memory 844, an external interface (I/F) 854, and an external device 856.
The imaging unit 821 is the portion that shoots an image and outputs the input image signal d362, and is made up of a lens 822, an aperture/shutter unit 824, a CCD 826, an amplifier 828, an A/D converter 830, a CCD control unit 836, and an information detection unit 838.
The lens 822 forms an image of the subject on the CCD 826. The aperture/shutter unit 824 is a mechanism that controls the exposure by changing the range or the duration over which the light beam passing through the lens 822 is admitted. The CCD 826 is an image sensor that photoelectrically converts the image of the subject and outputs it as an image signal. The amplifier 828 is a device for amplifying the image signal output from the CCD 826. The A/D converter 830 is a device that converts the analog image signal amplified by the amplifier 828 into a digital image signal. The CCD control unit 836 is a device that controls the timing for driving the CCD 826. The information detection unit 838 is a device that detects information such as autofocus, aperture, and exposure from the digital image signal and outputs it to the CPU 846.
The image processing apparatus 832 is the same as the image processing apparatus 723 described with Fig. 77 in the 10th execution mode. The image processing apparatus 832 is a device that, under the control of the CPU 846, performs image processing of the input image data d372 (see Fig. 96) contained in the input image signal d362 and outputs an output image signal d361 containing output image data d371 (see Fig. 96). The image processing apparatus 832 is characterized in that it includes the visual processing unit described in the embodiments above and performs image processing using profile data. Its detailed configuration is described later using Fig. 96.
The display unit 834 is a device that displays, for example as a thumbnail, the output image signal d361 output from the image processing apparatus 832. The display unit 834 is typically configured as an LCD, but it is not limited thereto as long as it is a device that displays images, such as a PDP, a CRT, or a projector. The display unit 834 may not only be built into the filming apparatus 820 but may also be connected by wire or wirelessly, for example through a network. The display unit 834 may also be connected to the CPU 846 via the image processing apparatus 832.
The CPU 846 is connected to the image processing apparatus 832, the encoder 840, the memory controller 842, and the external I/F 854 via a bus. It receives the detection results of the information detection unit 838, the input results from the input unit 850, illumination information from the illumination unit 848, the determination results from the security determination unit 852, and the like, and controls the lens 822, the aperture/shutter unit 824, the CCD control unit 836, the image processing apparatus 832, the illumination unit 848, the input unit 850, the security determination unit 852, and the other components connected to the bus.
The illumination unit 848 is a flash or the like that emits illumination light toward the subject.
The input unit 850 is a user interface through which the user operates the filming apparatus 820, and consists of keys, dials, a remote control, and the like for controlling the components.
The security determination unit 852 is a portion that evaluates security information obtained from the outside and controls the image processing apparatus 832 via the CPU 846.
The encoder 840 is a compression circuit that compresses the output image signal d361 from the image processing apparatus 832 by JPEG, MPEG, or the like.
The memory controller 842 controls the addresses and access timing of the memory 844, which is made up of DRAM or the like.
The memory 844 is made up of DRAM or the like and is used as working memory during image processing and other operations.
The external I/F 854 is an interface for outputting the output image signal d361, or the output image signal d361 compressed by the encoder 840, to external devices 856 such as a memory card 859 and a PC 861, and also for acquiring profile information, that is, information on the profile data used for image processing, and outputting it to the image processing apparatus 832 as the input image signal d362. The profile information is the same as that described in the 10th execution mode. The external I/F 854 is made up of, for example, a memory card I/F 858, a PC I/F 860, a network I/F 862, and a wireless I/F 864. The external I/F 854 need not include all of the portions illustrated here.
The memory card I/F 858 is an interface for connecting the memory card 859, which records image data, profile information, and the like, to the filming apparatus 820. The PC I/F 860 is an interface for connecting the PC 861, an external device such as a personal computer that records image data, profile information, and the like, to the filming apparatus 820. The network I/F 862 is an interface for connecting the filming apparatus 820 to a network and sending and receiving image data, profile information, and the like. The wireless I/F 864 is an interface for connecting the filming apparatus 820 to an external device via a wireless LAN or the like and sending and receiving image data, profile information, and the like. The external I/F 854 is not limited to those illustrated; it may also be an interface for connecting the filming apparatus 820 via USB, optical fiber, or the like.
(Image processing apparatus 832)
Fig. 96 shows the configuration of the image processing apparatus 832. The image processing apparatus 832 has the same configuration as the image processing apparatus 723. In Fig. 96, portions that have the same functions as those of the image processing apparatus 723 are given the same reference symbols.
The image processing apparatus 832 includes a color visual processing unit 745 that performs color visual processing on the input image data d372, a color processing unit 746 that performs color processing on a color visual processing signal d373 output from the color visual processing unit 745, and a profile information output unit 747 that outputs the profile information SSI and SCI specifying the profile data used in the color visual processing and the color processing.
The operation of each portion has been described in the 10th execution mode, and its detailed explanation is therefore omitted.
In the 10th execution mode it was stated that the environment information included in the profile information SSI and SCI is "information relating to the viewing environment in which the image data after image processing are displayed". It may also be information relating to the shooting environment.
(Effects of the filming apparatus 820)
The filming apparatus 820 includes the image processing apparatus 832, which is the same as the image processing apparatus 723 described in the 10th execution mode. It therefore provides the same effects as the display unit 720 (see Fig. 76) that includes the image processing apparatus 723.
(1)
Since the filming apparatus 820 includes the profile information output unit 747 (see Fig. 78), image processing can be performed using profile data suited to the acquired environment information. In particular, since profile data can be selected not only from automatically detected environment information but also from environment information entered by the user, image processing with a high visual effect for the user can be carried out.
When a look-up table is used as the profile data, image processing can be performed by referring to the table, so high-speed image processing can be realized.
In the filming apparatus 820, different image processing is realized by changing the profile data. That is, different image processing is realized without changing the hardware configuration.
In image processing using profile data, the profile data can be generated in advance, so even complex image processing can be realized easily.
(2)
The profile information output unit 747 of the image processing apparatus 832 can output different profile information to the color visual processing unit 745 and to the color processing unit 746. This prevents the image processing in the color visual processing unit 745 and the color processing unit 746 from being duplicated or from canceling out each other's effects. That is, the image processing apparatus 832 can process the image appropriately.
(3)
The filming apparatus 820 includes the display unit 834, so the user can shoot while confirming the image after image processing. The impression of the image at the time of shooting can therefore be made close to the impression of the image displayed after shooting.
(Variations)
The filming apparatus 820 can be modified in the same ways as those described in the embodiments above in connection with the image processing apparatus 723 and the visual processing unit 753 (see Fig. 79). The variations characteristic of the filming apparatus 820 are described below.
(1)
In the explanation of the 10th execution mode, the information input unit 748 (see Fig. 78) of the profile information output unit 747 was described as an input device with which the user enters environment information.
In the filming apparatus 820, the information input unit 748 may be a device that can accept other information in addition to, or instead of, the environment information. For example, the information input unit 748 may accept user input information such as the brightness or the image quality preferred by the user.
As this variation, the profile information output unit 747 may include, in addition to or instead of the information input unit 748, the user input unit 772 (see Fig. 86) described in (Variation) (7) of the 10th execution mode. The detailed description of the user input unit 772 has been given in the embodiments above and is therefore omitted.
The output control unit 750 (see Fig. 78) of the profile information output unit 747 of this variation outputs the profile information SSI and SCI based on the user input information entered from the user input unit 772 and the environment information detected by the environment detection unit 749. More specifically, the output control unit 750 of this variation outputs the profile information SSI and SCI by referring to a database or the like that associates the values of the user input information and the environment information with profile data.
In this way, the filming apparatus 820 can realize image processing using suitable profile data that match the user's preference.
(2)
Among the components of the filming apparatus 820 described in the embodiments above, portions that realize the same function may be implemented by a single shared component.
For example, the input unit 850 (see Fig. 95) of the filming apparatus 820 may also serve as the information input unit 748 of the profile information output unit 747, the user input unit 772 of the profile information output unit 747 of the variation, the input device 527 of the visual processing unit 753b (see Fig. 83), the input device 527 of the visual processing unit 753c (see Fig. 84), and the like.
Furthermore, the profile data registration unit 526 of the profile data registration device 8 of the visual processing unit 753 (see Fig. 80), the profile data registration units 521 of the visual processing unit 753a (see Fig. 82) and the visual processing unit 753b (see Fig. 83), the profile data registration unit 531 of the visual processing unit 753c (see Fig. 84), and the like may be equipment provided outside the image processing apparatus 832, and may be realized by, for example, the memory 844 or the external device 856.
The profile data registered in each profile data registration unit or profile data registration device may be registered therein in advance, or may be acquired from the external device 856.
Each profile data registration unit or profile data registration device may also double as the storage device that stores the profile data in the color processing unit 746.
The profile information output unit 747 may also be a device connected by wire or wirelessly outside the image processing apparatus 832 or outside the filming apparatus 820.
(3)
The image processing apparatus 832 of the filming apparatus 820 may also be a device that outputs profile information specifying the profile data used for image processing together with the input image data d372, or together with the output image signal d361 obtained by performing image processing on the input image data d372.
This is described using Figs. 97 to 101.
(3-1) Configuration of the image processing apparatus 886
The configuration of an image processing apparatus 886 as a variation is described using Fig. 97. The image processing apparatus 886 is a device that performs image processing of the input image data d372 and displays the result on the display unit 834, and at the same time appends profile information d401 on the profile data suited to the image processing to the input image data d372 and outputs the result.
The image processing apparatus 886 includes a color visual processing unit 888, a color processing unit 889, a recommended profile information extraction unit 890, and a profile information appending unit 892.
The color visual processing unit 888 has substantially the same functions as the color visual processing unit 745 described in the 10th execution mode; like the color visual processing unit 745, it performs visual processing of the input image data d372 and outputs the color visual processing signal d373.
The difference between the color visual processing unit 888 and the color visual processing unit 745 is that the visual processing unit included in the color visual processing unit 888 is substantially the same as any of the visual processing unit 1 (see Fig. 1), the visual processing unit 520 (see Fig. 6), the visual processing unit 525 (see Fig. 7), and the visual processing unit 530 (see Fig. 8), and that this visual processing unit outputs recommended profile information SSO. The details of the recommended profile information SSO are described later.
The color processing unit 889 has substantially the same functions as the color processing unit 746 described in the 10th execution mode; like the color processing unit 746, it performs color processing of the color visual processing signal d373 and outputs the output image data d371.
The difference between the color processing unit 889 and the color processing unit 746 is that the color processing unit 889 outputs profile information on the profile data used in the color processing as recommended profile information SCO. The details of the recommended profile information SCO are described later.
The recommended profile information extraction unit 890 extracts the recommended profile information SSO and SCO and outputs these as the profile information d401.
The profile information appending unit 892 appends the profile information d401 to the input image data d372 and outputs the result as the output image signal d361.
Fig. 98 shows an example of the format of the output image signal d361 after the profile information appending unit 892 has appended the profile information d401.
In Fig. 98(a), the profile information d401 is placed at the head of the output image signal d361, followed by the input image data d372. With this format, the profile information d401 at the head is used for the image processing of all the input image data d372. Since the profile information d401 thus needs to be placed at only one location in the output image signal d361, the proportion of the output image signal d361 occupied by the profile information d401 can be reduced.
In Fig. 98(b), profile information d401 is placed for each of a plurality of divided portions of the input image data d372. With this format, different profile data can be used in the image processing of each divided portion of the input image data d372. Thus, for example, image processing using profile data suited to each scene of the input image data d372 can be carried out, enabling more precise image processing.
For a series of continuously changing scenes, scene attribute information may first be attached to the initial scene, and for the subsequent scenes only transition information on the brightness relative to the initial scene or transition information on the objects may be attached as the scene attribute information. This suppresses flicker and abrupt changes in image quality when image processing is performed on a moving picture.
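The two layouts of Fig. 98 can be pictured with the following sketch; the field names and types are illustrative assumptions, not a defined format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Segment:
    profile_info: Optional[dict]   # profile information d401 for this portion (None = reuse previous)
    image_data: bytes              # a divided portion of the input image data d372

# Fig. 98(a): one profile information block at the head, applied to all image data.
@dataclass
class OutputSignalA:
    profile_info: dict             # d401
    image_data: bytes              # d372

# Fig. 98(b): profile information attached per divided portion (e.g. per scene).
@dataclass
class OutputSignalB:
    segments: List[Segment]
```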
(3-2) Recommended profile information SSO and SCO
The recommended profile information SSO and SCO is information for specifying the respective profile data, and includes at least one of the profile data themselves, tag information such as a number identifying the profile data, and parameter information indicating the characteristics of the processing performed by the profile data. The profile data, tag information, and parameter information are the same as those described in the explanation of the profile information SSI and SCI.
The profile data specified by the recommended profile information SSO and SCO are profile data for performing any of the following image processing (a) to (c). Image processing (a) is the visual processing that the color visual processing unit 888 has judged to be suitable for the input image data d372, or the color processing that the color processing unit 889 has judged to be suitable for the color visual processing signal d373. Here, the image processing "judged to be suitable" in image processing (a) is, for example, the image processing actually used in the color visual processing unit 888 and in the color processing unit 889, respectively. Image processing (b) is, in addition to image processing (a), image processing for correcting the difference in characteristics between the display unit 834 of the filming apparatus 820 and a standard-model display device. Image processing (c) is, in addition to image processing (a), image processing for correcting the difference in characteristics between the display unit 834 of the filming apparatus 820 and a display device that displays the images captured by the filming apparatus 820.
When the display characteristics of the display unit 834 used to confirm the image at the time of shooting are not known, the color visual processing unit 888 and the color processing unit 889 output, as the recommended profile information SSO and SCO, profile information on the profile data for performing image processing (a).
When the display characteristics of the display unit 834 that displays the images captured by the filming apparatus 820 are known but the display characteristics of the display device that will display the images captured by the filming apparatus 820 (for example, the display unit 720 used to display the shot and recorded images) are not known, the color visual processing unit 888 and the color processing unit 889 output, as the recommended profile information SSO and SCO, profile information on the profile data for performing image processing (b).
When both the display characteristics of the display unit 834 that displays the images captured by the filming apparatus 820 and the display characteristics of the display device that will display the images captured by the filming apparatus 820 (for example, the display unit 720 used to display the shot and recorded images) are known, the color visual processing unit 888 and the color processing unit 889 output, as the recommended profile information SSO and SCO, profile information on the profile data for performing image processing (c).
The above processing is merely an example, and the image processing selected in each situation is not limited to this.
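The choice described in the three preceding paragraphs can be summarized in the small sketch below; it is an illustration under the stated assumptions, not the claimed procedure.

```python
def recommend_processing(knows_capture_display: bool, knows_target_display: bool) -> str:
    """Return which of image processing (a), (b), (c) to recommend via SSO/SCO."""
    if not knows_capture_display:
        return "a"   # only the processing judged suitable for the data itself
    if not knows_target_display:
        return "b"   # (a) + correction of capture display vs. standard-model display
    return "c"       # (a) + correction of capture display vs. the actual target display
```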
(3-3) Effects of the image processing apparatus 886
The image processing apparatus 886 outputs the output image signal d361 containing the profile information d401. A device that acquires the output image signal d361 can therefore use suitable profile data when performing image processing of the input image data d372 contained in the output image signal d361.
Further, the profile information d401 contains profile information on profile data for performing any of image processing (a) to (c). Thus, for example, the image confirmed on the display unit 834 of the filming apparatus 820 can be made close to the image displayed by the display device that acquires the output image signal d361. That is, a display device that has acquired profile information on profile data for performing image processing (b) can make its displayed image close to the image confirmed on the display unit 834 by performing image processing (b) on the output image signal d361 while also performing image processing for correcting its own difference from the standard-model display device. A display device that has acquired profile information on profile data for performing image processing (c) can make its displayed image close to the image confirmed on the display unit 834 by performing image processing (c) on the output image signal d361.
(3-4) Variations
The output image signal d361 may further contain processing flag information indicating whether the input image data d372 contained in the output image signal d361 have already been subjected to image processing in the image processing apparatus 886. A display device that acquires the output image signal d361 can then judge whether the input image data d372 contained in the output image signal d361 are data that have already been processed. This prevents excessive image processing, or image processing whose effects cancel each other out, in the display device.
(2) Image processing apparatus
In the explanation of the image processing apparatus 886 above, it was stated that "the profile information appending unit 892 appends the profile information d401 to the input image data d372 and outputs the result".
Here, the profile information appending unit 892 may instead append the profile information d401 to the output image data d371, which is the result of performing image processing on the input image data d372, and output that.
Fig. 99 shows an image processing apparatus 894 as a variation of the image processing apparatus 886. Portions that realize the same functions as those of the image processing apparatus 886 are given the same reference symbols. The image processing apparatus 894 shown in Fig. 99 is characterized in that the profile information appending unit 892 appends the profile information d401 to the output image data d371.
In the image processing apparatus 894 of Fig. 99, the profile data specified by the recommended profile information SSO and SCO are profile data for performing any of the following image processing (a') to (c'). Image processing (a') is the visual processing that the color visual processing unit 888 has judged to be suitable for the input image data d372, or the color processing that the color processing unit 889 has judged to be suitable for the color visual processing signal d373. Here, the image processing "judged to be suitable" in image processing (a') is, for example, the image processing actually used in the color visual processing unit 888 and in the color processing unit 889, respectively. Image processing (b') is image processing for correcting the difference in characteristics between the display unit 834 of the filming apparatus 820 and a standard-model display device. Image processing (c') is image processing for correcting the difference in characteristics between the display unit 834 of the filming apparatus 820 and a display device that displays the images captured by the filming apparatus 820.
The explanation of the operation of the other components is omitted.
In the image processing apparatus 894, for example, a display device that has acquired profile information on the profile data for performing image processing (a') can regenerate the input image data d372 by performing the inverse transformation of image processing (a'). A display device that has acquired the profile information on the profile data for performing image processing (a') may also issue an instruction that no further color visual processing or color processing is to be performed thereafter. A display device that has acquired profile information on the profile data for performing image processing (b') can make its displayed image close to the image confirmed on the display unit 834 by performing image processing for correcting its own difference from the standard-model display device. A display device that has acquired profile information on the profile data for performing image processing (c') can make its displayed image close to the image confirmed on the display unit 834 by performing image processing (c').
The image processing apparatus 894 may also output the output image signal d361 containing the processing flag information described above. A display device that acquires the output image signal d361 can then judge that the output image data d371 contained in the output image signal d361 are data that have already been subjected to image processing, which prevents excessive image processing, or image processing whose effects cancel each other out, in the display device.
(3) Image processing apparatus
The image processing apparatus 886 and the image processing apparatus 894 described above may each include a user input unit 897, which is the same as the user input unit 772 (see Fig. 86) described in (Variation) (7) of the 10th execution mode, and may be devices in which user input is reflected in the selection of the profile data.
Figs. 100 and 101 show an image processing apparatus 896 and an image processing apparatus 898 that include the user input unit 897. The operation of the user input unit 897 is the same as that of the user input unit 772 described in (Variation) (7) of the 10th execution mode, and its detailed explanation is therefore omitted.
In the image processing apparatus 896 and the image processing apparatus 898, the color visual processing unit 888 includes a visual processing unit that is substantially the same as any of the visual processing unit 753 (see Fig. 80), the visual processing unit 753a (see Fig. 82), the visual processing unit 753b (see Fig. 83), and the visual processing unit 753c (see Fig. 84), and that can output the recommended profile information SSO. That is, it acquires the profile information SSI from the user input unit 897 and at the same time can output the recommended profile information SSO.
Likewise, in the image processing apparatus 896 and the image processing apparatus 898, the color processing unit 889 can acquire the profile information SCI from the user input unit 897 and at the same time output the recommended profile information SCO.
In this way, in the image processing apparatus 896 and the image processing apparatus 898, the profile data used in image processing can be optimized at the time of shooting while viewing the image displayed on the display unit 834. At this time, since the profile information SSI and SCI can be given to the color visual processing unit 888 and the color processing unit 889 respectively, excessive processing effects in either device, or effects that cancel each other out, can be prevented. Moreover, the user input unit 897 enables finer adjustment of the image processing. Furthermore, since only the profile information SSI and SCI needed by the color visual processing unit 888 and the color processing unit 889 is given to them, the processing information in each device can be reduced and the processing simplified.
(4)
(4-1)
In the filming apparatus 820, the image processing apparatus 832 (see Fig. 96) may be a device that acquires security information and switches the profile data used in image processing according to the security information. Here, the security information is information indicating whether shooting is permitted in the shooting environment of the filming apparatus 820, or the degree of that permission.
Fig. 102 shows an image processing apparatus 870 as a variation of the image processing apparatus 832. The image processing apparatus 870 is the same as the image processing apparatus 832 in that it performs image processing of the input image data d372 and outputs the output image data d371. The difference between the image processing apparatus 870 and the image processing apparatus 832 is that the image processing apparatus 870 includes a security information input unit 872 that acquires security information in the environment in which shooting takes place. Portions common to the image processing apparatus 832 are given the same reference symbols and their explanation is omitted.
The security information input unit 872 is mainly made up of, for example, an input device with which the user directly enters security information, or a receiving device that acquires security information wirelessly, by infrared rays, or by wire. The security information input unit 872 then outputs the profile information SSI and SCI based on the acquired security information.
Here, the profile information SSI and SCI is information for specifying the respective profile data, and includes at least one of the profile data themselves, tag information such as a number identifying the profile data, and parameter information indicating the characteristics of the processing performed by the profile data. The profile data, tag information, and parameter information are the same as those described in the embodiments above.
The output profile information SSI and SCI specifies profile data with which shooting at higher image quality is performed as the degree of shooting permission indicated by the security information is higher, and specifies profile data with which shooting at lower image quality is performed as the degree of shooting permission is lower.
The operation of the image processing apparatus 870 is described in further detail using Fig. 103.
Fig. 103 is an explanatory diagram for describing the operation of the filming apparatus 820, equipped with the image processing apparatus 870, in a shooting-restricted area 880 where shooting is controlled.
In the shooting-restricted area 880 there is a photography-prohibited object 883 whose shooting is prohibited. A photography-prohibited object is, for example, a person, a book, or another object protected by portrait rights, copyright, or the like. A security information transmission device 881 is provided in the shooting-restricted area 880. The security information transmission device 881 transmits security information wirelessly, by infrared rays, or the like.
The filming apparatus 820 located in the shooting-restricted area 880 receives the security information through the security information input unit 872. The security information input unit 872 judges the degree of shooting permission indicated by the security information. The security information input unit 872 then refers to a stored database or the like that associates values of the degree of shooting permission with profile data, and outputs profile information SSI and SCI specifying the profile data corresponding to the value of the degree of shooting permission. In the database, for example, higher values of shooting permission are associated with profile data with which shooting at higher image quality is performed.
More specifically, for example, when the filming apparatus 820 receives from the security information transmission device 881 security information indicating a low degree of shooting permission, the security information input unit 872 outputs to the color visual processing unit 745 profile information SSI specifying profile data that smooth (or reduce the gradation of) the area near the center of the screen or the main area of the image. The security information input unit 872 further outputs to the color processing unit 746 profile information SCI specifying profile data that render the image achromatic. As a result, shooting at adequate image quality is impossible, and portrait rights and copyright can be protected.
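The following is a sketch only, with assumed thresholds and profile names, of how the degree of shooting permission carried by the security information might be mapped to the profile data specified by SSI (visual processing) and SCI (color processing).

```python
def profiles_for_permission(permission_level: float):
    """permission_level: 0.0 = shooting prohibited ... 1.0 = fully permitted."""
    if permission_level < 0.3:
        ssi = "smooth_main_region"      # blur / reduce gradation around the main subject
        sci = "achromatic"              # drop color so the result is unusable
    elif permission_level < 0.7:
        ssi = "reduced_quality"
        sci = "muted_color"
    else:
        ssi = "high_quality_visual"
        sci = "full_color"
    return ssi, sci
```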
(4-2) other
(1)
Received the security information input part 872 of security information,, the part of functions of image processing apparatus 870 or filming apparatus 820 has been stopped not only according to the changeable description document data of security information.
(2)
Received the security information input part 872 of security information, and then obtain user's authentication information from the input part 850 of filming apparatus 820 etc., the user of permission if be taken then can be with description document information SSI, the SCI output that the description document data that relax the degree of taking permission are determined.
User's authentication information is for example according to the identifying information of identifications such as user's fingerprint and iris.Obtained the security information input part 872 of this authentication information,, judged whether the user who is identified is the user who is taken and permits with reference to the user's data storehouse of the permission that is taken.In addition, at this moment, also can take the degree of permission according to user's judgements such as cost information, this degree is high more, then can carry out the more shooting of high image quality.
In addition, security information can be notified the information determined of filming apparatus 820 that is used for the permission that is taken.
(3)
Description document information SSI, SCI can comprise security information.In this case, obtained colorful visual processing unit 745 and the look processing unit 746 of description document information SSI, SCI,, selected the description document data based on security information.
(4)
The security information input part 872 may also double as the safety detection unit 852.
(the 1st remarks)
The present invention (in particular, the inventions described in the 4th to 7th execution modes) can also be expressed as follows. In addition, the dependent remarks described in this column ([the 1st remarks]) are dependent on other remarks of the 1st remarks.
(contents of the 1st remarks)
(remarks 1)
A visual processing unit comprising:
an image region segmentation mechanism that divides an input picture signal into a plurality of image regions;
a greyscale transformation characteristic derivation mechanism that derives a greyscale transformation characteristic for each of the image regions, the greyscale transformation characteristic of an object image region, which is the region for which the characteristic is derived, being derived using the gamma characteristics of the object image region and of its surrounding image regions; and
a gray scale processing mechanism that performs gray scale processing of the picture signal based on the derived greyscale transformation characteristic.
(remarks 2)
The visual processing unit according to Remark 1, wherein
the greyscale transformation characteristic is a gray-scale transformation curve, and
the greyscale transformation characteristic derivation mechanism has: a histogram creation mechanism that creates a histogram using the gamma characteristics; and a grey scale curve creation mechanism that creates the gray-scale transformation curve based on the created histogram.
(remarks 3)
The visual processing unit according to Remark 1, wherein
the greyscale transformation characteristic is a selection signal for selecting one greyscale transformation table from among a plurality of greyscale transformation tables used for the gray scale processing of the picture signal, and
the gray scale processing mechanism has the plurality of greyscale transformation tables as a two-dimensional LUT.
(remarks 4)
The visual processing unit according to Remark 3, wherein
the two-dimensional LUT stores the plurality of greyscale transformation tables in an order such that, for every value of the picture signal, the value of the gray-scale-processed picture signal corresponding to the value of the selection signal increases or decreases monotonically.
(remarks 5)
The visual processing unit according to Remark 3 or 4, wherein
the two-dimensional LUT can be changed by registration of profile data.
(remarks 6)
The visual processing unit according to any one of Remarks 3 to 5, wherein
the value of the selection signal is derived as a characteristic quantity of individual selection signals, each of which is derived for a respective one of the object image region and the surrounding image regions.
(remarks 7)
The visual processing unit according to any one of Remarks 3 to 5, wherein
the selection signal is derived based on a gamma characteristic quantity, which is a characteristic quantity derived using the gamma characteristics of the object image region and the surrounding image regions.
(remarks 8)
The visual processing unit according to any one of Remarks 3 to 7, wherein
the gray scale processing mechanism has: a gray scale processing execution mechanism that performs the gray scale processing of the object image region using the greyscale transformation table selected by the selection signal; and a correction mechanism that corrects the gray scale of the gray-scale-processed picture signal,
the correction mechanism correcting the gray scale of an object pixel, which is the object of the correction, based on the greyscale transformation tables selected for the image region containing the object pixel and for the neighboring image regions of that image region.
(remarks 9)
The visual processing unit according to any one of Remarks 3 to 7, wherein
the gray scale processing mechanism has: a correction mechanism that corrects the selection signal and derives a corrected selection signal for selecting a greyscale transformation table for each pixel of the picture signal; and a gray scale processing execution mechanism that performs the gray scale processing of the picture signal using the greyscale transformation table selected by the corrected selection signal.
(remarks 10)
A visual processing method comprising:
an image region segmentation step of dividing an input picture signal into a plurality of image regions;
a greyscale transformation characteristic derivation step of deriving a greyscale transformation characteristic for each of the image regions, the greyscale transformation characteristic of an object image region, which is the region for which the characteristic is derived, being derived using the gamma characteristics of the object image region and of its surrounding image regions; and
a gray scale processing step of performing gray scale processing of the picture signal based on the derived greyscale transformation characteristic.
(remarks 11)
The visual processing method according to Remark 10, wherein
the greyscale transformation characteristic is a gray-scale transformation curve, and
the greyscale transformation characteristic derivation step includes: a histogram creation step of creating a histogram using the gamma characteristics; and a grey scale curve creation step of creating the gray-scale transformation curve based on the created histogram.
(remarks 12)
The visual processing method according to Remark 10, wherein
the greyscale transformation characteristic is a selection signal for selecting one greyscale transformation table from among a plurality of greyscale transformation tables used for the gray scale processing of the picture signal,
the gray scale processing step includes: a gray scale processing execution step of performing the gray scale processing of the object image region using the greyscale transformation table selected by the selection signal; and a correction step of correcting the gray scale of the gray-scale-processed picture signal, and
the correction step corrects the gray scale of an object pixel, which is the object of the correction, based on the greyscale transformation tables selected for the image region containing the object pixel and for the neighboring image regions of that image region.
(remarks 13)
The visual processing method according to Remark 10, wherein
the greyscale transformation characteristic is a selection signal for selecting one greyscale transformation table from among a plurality of greyscale transformation tables used for the gray scale processing of the picture signal, and
the gray scale processing step includes: a correction step of correcting the selection signal and deriving a corrected selection signal for selecting a greyscale transformation table for each pixel of the picture signal; and a gray scale processing execution step of performing the gray scale processing of the picture signal using the greyscale transformation table selected by the corrected selection signal.
(remarks 14)
A visual processing program for causing a computer to execute a visual processing method,
the visual processing method comprising:
an image region segmentation step of dividing an input picture signal into a plurality of image regions;
a greyscale transformation characteristic derivation step of deriving a greyscale transformation characteristic for each of the image regions, the greyscale transformation characteristic of an object image region, which is the region for which the characteristic is derived, being derived using the gamma characteristics of the object image region and of its surrounding image regions; and
a gray scale processing step of performing gray scale processing of the picture signal based on the derived greyscale transformation characteristic.
(remarks 15)
The visual processing program according to Remark 14, wherein
the greyscale transformation characteristic is a gray-scale transformation curve, and
the greyscale transformation characteristic derivation step includes: a histogram creation step of creating a histogram using the gamma characteristics; and a grey scale curve creation step of creating the gray-scale transformation curve based on the created histogram.
(remarks 16)
The visual processing program according to Remark 14, wherein
the greyscale transformation characteristic is a selection signal for selecting one greyscale transformation table from among a plurality of greyscale transformation tables used for the gray scale processing of the picture signal,
the gray scale processing step includes: a gray scale processing execution step of performing the gray scale processing of the object image region using the greyscale transformation table selected by the selection signal; and a correction step of correcting the gray scale of the gray-scale-processed picture signal, and
the correction step corrects the gray scale of an object pixel, which is the object of the correction, based on the greyscale transformation tables selected for the image region containing the object pixel and for the neighboring image regions of that image region.
(remarks 17)
The visual processing program according to Remark 14, wherein
the greyscale transformation characteristic is a selection signal for selecting one greyscale transformation table from among a plurality of greyscale transformation tables used for the gray scale processing of the picture signal, and
the gray scale processing step includes: a correction step of correcting the selection signal and deriving a corrected selection signal for selecting a greyscale transformation table for each pixel of the picture signal; and a gray scale processing execution step of performing the gray scale processing of the picture signal using the greyscale transformation table selected by the corrected selection signal.
(remarks 18)
The visual processing unit according to Remark 1, wherein
the gray scale processing mechanism has a parameter output mechanism that outputs, based on the greyscale transformation characteristic, curve parameters of a gray-scale transformation curve used for the gray scale processing of the picture signal, and
the gray scale processing is performed on the picture signal using the gray-scale transformation curve determined by the curve parameters output based on the greyscale transformation characteristic.
(remarks 19)
The visual processing unit according to Remark 18, wherein
the parameter output mechanism is a lookup table storing the relation between the greyscale transformation characteristic and the curve parameters.
(remarks 20)
The visual processing unit according to Remark 18 or 19, wherein
the curve parameters include the value of the gray-scale-processed picture signal corresponding to a predetermined value of the picture signal.
(remarks 21)
The visual processing unit according to any one of Remarks 18 to 20, wherein
the curve parameters include the slope of the gray-scale transformation curve in a predetermined interval of the picture signal.
(remarks 22)
The visual processing unit according to any one of Remarks 18 to 21, wherein
the curve parameters include the coordinates of at least one point through which the gray-scale transformation curve passes.
(remarks 23)
A visual processing unit comprising:
a spatial processing mechanism that performs spatial processing on each of a plurality of image regions of an input picture signal and derives a spatially processed signal, the spatial processing taking a weighted average of the gamma characteristics of an object image region, which is the object of the spatial processing, and of its surrounding image regions, using weights based on the difference between the gamma characteristics of the object image region and of the surrounding image regions; and
a visual processing mechanism that performs visual processing of the object image region based on the gamma characteristic of the object image region and the spatially processed signal.
(remarks 24)
The visual processing unit according to Remark 23, wherein
the weight is smaller the larger the absolute value of the difference between the gamma characteristics is.
(remarks 25)
The visual processing unit according to Remark 23 or 24, wherein
the weight is smaller the larger the distance between the object image region and the surrounding image region is.
(remarks 26)
The visual processing unit according to any one of Remarks 23 to 25, wherein
each of the image regions consists of a plurality of pixels, and
the gamma characteristics of the object image region and the surrounding image regions are determined as characteristic quantities of the pixel values constituting the respective image regions.
(explanations of the 1st remarks)
The visual processing unit according to Remark 1 comprises an image region segmentation mechanism, a greyscale transformation characteristic derivation mechanism, and a gray scale processing mechanism. The image region segmentation mechanism divides an input picture signal into a plurality of image regions. The greyscale transformation characteristic derivation mechanism derives a greyscale transformation characteristic for each image region; the characteristic of an object image region, which is the region for which the characteristic is derived, is derived using the gamma characteristics of the object image region and of its surrounding image regions. The gray scale processing mechanism performs gray scale processing of the picture signal based on the derived greyscale transformation characteristic.
Here, the greyscale transformation characteristic is the characteristic of the gray scale processing applied to each image region. The gamma characteristic means, for example, a pixel value such as the luminance or brightness of each pixel.
In the visual processing unit of the present invention, when the greyscale transformation characteristic of each image region is determined, not only the gamma characteristic of that image region but also the gamma characteristics of a wider area including the surrounding image regions are used. A spatial processing effect is therefore added to the gray scale processing of each image region, and gray scale processing with a further improved visual effect can be realized.
The visual processing unit according to Remark 2 is the visual processing unit according to Remark 1 in which the greyscale transformation characteristic is a gray-scale transformation curve. The greyscale transformation characteristic derivation mechanism has a histogram creation mechanism that creates a histogram using the gamma characteristics, and a grey scale curve creation mechanism that creates the gray-scale transformation curve based on the created histogram.
Here, the histogram is, for example, the distribution of the gamma characteristics of the pixels contained in the object image region and the surrounding image regions. The grey scale curve creation mechanism uses, for example, the cumulative curve obtained by summing the histogram values as the gray-scale transformation curve.
In the visual processing unit of the present invention, when the histogram is created, not only the gamma characteristic of each image region but also the gamma characteristics of a wider area including the surrounding image regions can be used. The number of divisions of the picture signal can therefore be increased and the size of each image region reduced, so that the occurrence of pseudo-contours in the gray scale processing can be suppressed and unnatural, conspicuous boundaries between image regions can be prevented.
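A minimal sketch of this histogram-based derivation is shown below. It assumes that the gamma characteristic is simply the pixel brightness and that the cumulative (summation) curve of the histogram is used directly as the tone curve, with 256 levels chosen arbitrarily.

import numpy as np

def tone_curve_from_histogram(object_region, surrounding_regions, levels=256):
    # Collect the brightness values of the object image region and its surrounding regions.
    pixels = np.concatenate([object_region.ravel()] +
                            [r.ravel() for r in surrounding_regions])
    hist, _ = np.histogram(pixels, bins=levels, range=(0, levels))
    # The cumulative curve of the histogram, normalised to the output range,
    # serves as the gray-scale transformation curve of the object image region.
    cdf = np.cumsum(hist).astype(np.float64)
    return (levels - 1) * cdf / cdf[-1]   # curve[v] = processed value for input value v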
In the visual processing unit according to Remark 3, which is based on the visual processing unit according to Remark 1, the greyscale transformation characteristic is a selection signal for selecting one greyscale transformation table from among a plurality of greyscale transformation tables used for the gray scale processing of the picture signal, and the gray scale processing mechanism holds the plurality of greyscale transformation tables as a two-dimensional LUT.
Here, a greyscale transformation table is, for example, a lookup table (LUT) that stores, for each pixel value of the picture signal, the pixel value of the gray-scale-processed picture signal.
The selection signal takes, for example, one of the values assigned to the respective greyscale transformation tables, namely the value assigned to the single table to be selected. The gray scale processing mechanism refers to the two-dimensional LUT using the value of the selection signal and the pixel value of the picture signal, and outputs the pixel value of the gray-scale-processed picture signal.
In the visual processing unit of the present invention, the gray scale processing is performed by referring to the two-dimensional LUT, so the processing can be carried out at high speed. Moreover, since one greyscale transformation table is selected from the plurality of tables before the processing is performed, appropriate gray scale processing can be achieved.
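For illustration only, the reference to the two-dimensional LUT can be pictured as a double index, one axis addressed by the selection signal and the other by the pixel value; the array layout assumed here (one greyscale transformation table per row) and the function name are not taken from the specification.

import numpy as np

# lut2d[s, v]: gray-scale-processed pixel value for selection-signal value s
# and input pixel value v (each row is one greyscale transformation table).
def apply_2d_lut(lut2d: np.ndarray, selection: np.ndarray, image: np.ndarray) -> np.ndarray:
    return lut2d[selection, image]   # per-pixel table lookup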
The visual processing unit according to Remark 4 is the visual processing unit according to Remark 3 in which the two-dimensional LUT stores the plurality of greyscale transformation tables in an order such that, for every value of the picture signal, the value of the gray-scale-processed picture signal corresponding to the value of the selection signal increases or decreases monotonically.
In the visual processing unit of the present invention, the value of the selection signal then represents, for example, the degree of the greyscale transformation.
The visual processing unit according to Remark 5 is the visual processing unit according to Remark 3 or 4 in which the two-dimensional LUT is changed by registration of profile data.
Here, profile data are the data stored in the two-dimensional LUT, having, for example, the pixel values of the gray-scale-processed picture signal as their elements.
In the visual processing unit of the present invention, the characteristics of the gray scale processing can be changed in various ways simply by changing the two-dimensional LUT, without changing the hardware configuration.
The visual processing unit according to Remark 6 is the visual processing unit according to any one of Remarks 3 to 5 in which the value of the selection signal is derived as a characteristic quantity of individual selection signals, each of which is derived separately for a respective one of the object image region and the surrounding image regions.
Here, the characteristic quantity of the individual selection signals is, for example, the mean (simple or weighted), the maximum, or the minimum of the selection signals derived for the respective image regions.
In the visual processing unit of the present invention, the selection signal for the object image region is derived as a characteristic quantity of the selection signals of a wider area including the surrounding image regions. A spatial processing effect is therefore added to the selection signal, and unnatural, conspicuous boundaries between image regions can be prevented.
The visual processing unit according to Remark 7 is the visual processing unit according to any one of Remarks 3 to 5 in which the selection signal is derived based on a gamma characteristic quantity, which is a characteristic quantity derived using the gamma characteristics of the object image region and the surrounding image regions.
Here, the gamma characteristic quantity is, for example, the mean (simple or weighted), the maximum, or the minimum of the gamma characteristics of the wide area spanning the object image region and the surrounding image regions.
In the visual processing unit of the present invention, the selection signal for the object image region is derived based on the gamma characteristic quantity of a wider area including the surrounding image regions. A spatial processing effect is therefore added to the selection signal, and unnatural, conspicuous boundaries between image regions can be prevented.
In the visual processing unit according to Remark 8, which is based on any one of Remarks 3 to 7, the gray scale processing mechanism has a gray scale processing execution mechanism and a correction mechanism. The gray scale processing execution mechanism performs the gray scale processing of the object image region using the greyscale transformation table selected by the selection signal. The correction mechanism corrects the gray scale of the gray-scale-processed picture signal; the gray scale of an object pixel, which is the object of the correction, is corrected based on the greyscale transformation tables selected for the image region containing the object pixel and for the neighboring image regions of that image region.
Here, the neighboring image regions may be the same image regions as the surrounding image regions used when the greyscale transformation characteristic was derived, or they may be different image regions. For example, the neighboring image regions may be selected as the three image regions that are adjacent to the image region containing the object pixel and are closest to the object pixel.
The correction mechanism corrects, for example, the gray scale of the picture signal that has been gray-scale-processed with the same greyscale transformation table throughout each object image region. The correction of the object pixel is performed, for example, so that the influence of each greyscale transformation table selected for the neighboring image regions appears in accordance with the position of the object pixel.
In the visual processing unit of the present invention, the gray scale of the picture signal can be corrected pixel by pixel. Unnatural, conspicuous boundaries between image regions are therefore prevented, and the visual effect is further improved.
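One plausible form of this per-pixel correction, sketched under the assumption that it is a position-dependent blend of the outputs of the tables selected for the containing region and its neighboring regions (the weighting rule itself is not fixed by the remarks):

def corrected_pixel_value(pixel_value, position_weights, selected_tables):
    # position_weights: weights reflecting how close the object pixel lies to the
    #                   containing image region and to each neighboring image region
    # selected_tables:  the greyscale transformation tables selected for those regions
    total = sum(position_weights)
    blended = sum(w * table[pixel_value]
                  for w, table in zip(position_weights, selected_tables))
    return blended / total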
In the visual processing unit according to Remark 9, which is based on any one of Remarks 3 to 7, the gray scale processing mechanism has a correction mechanism and a gray scale processing execution mechanism. The correction mechanism corrects the selection signal and derives a corrected selection signal for selecting a greyscale transformation table for each pixel of the picture signal. The gray scale processing execution mechanism performs the gray scale processing of the picture signal using the greyscale transformation table selected by the corrected selection signal.
The correction mechanism corrects, for example, the selection signal derived for each object image region based on the pixel position and on the selection signals derived for the image regions adjacent to the object image region, and thereby derives a selection signal for each pixel.
In the visual processing unit of the present invention, a selection signal can be derived for each pixel. Unnatural, conspicuous boundaries between image regions are therefore further prevented, and the visual effect is improved.
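By way of illustration, the corrected selection signal of each pixel could be obtained by interpolating the per-region selection signals according to the pixel position; the inverse-distance weighting used below is only one possible choice, not the specification's rule.

def corrected_selection_signal(px, py, region_centres, region_selections):
    # Interpolate the region-wise selection signals at pixel position (px, py);
    # the weight of each region falls off with its distance from the pixel.
    weights = [1.0 / (1e-6 + abs(px - cx) + abs(py - cy)) for cx, cy in region_centres]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, region_selections)) / total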
The visual processing method according to Remark 10 comprises an image region segmentation step, a greyscale transformation characteristic derivation step, and a gray scale processing step. In the image region segmentation step, an input picture signal is divided into a plurality of image regions. In the greyscale transformation characteristic derivation step, a greyscale transformation characteristic is derived for each image region; the characteristic of an object image region, which is the region for which the characteristic is derived, is derived using the gamma characteristics of the object image region and of its surrounding image regions. In the gray scale processing step, gray scale processing of the picture signal is performed based on the derived greyscale transformation characteristic.
Here, the greyscale transformation characteristic is the characteristic of the gray scale processing applied to each image region. The gamma characteristic is, for example, a pixel value such as the luminance or brightness of each pixel.
In the visual processing method of the present invention, when the greyscale transformation characteristic of each image region is determined, not only the gamma characteristic of that image region but also the gamma characteristics of a wider area including the surrounding image regions can be used. A spatial processing effect is therefore added to the gray scale processing of each image region, and gray scale processing with a higher visual effect can be realized.
The visual processing method according to Remark 11 is the visual processing method according to Remark 10 in which the greyscale transformation characteristic is a gray-scale transformation curve. The greyscale transformation characteristic derivation step includes a histogram creation step of creating a histogram using the gamma characteristics, and a grey scale curve creation step of creating the gray-scale transformation curve based on the created histogram.
Here, the histogram is, for example, the distribution of the gamma characteristics of the pixels contained in the object image region and the surrounding image regions. In the grey scale curve creation step, for example, the cumulative curve obtained by summing the histogram values is used as the gray-scale transformation curve.
In the visual processing method of the present invention, when the histogram is created, not only the gamma characteristic of each image region but also the gamma characteristics of a wider area including the surrounding image regions can be used. The number of divisions of the picture signal can therefore be increased and the size of each image region reduced, so that the occurrence of pseudo-contours in the gray scale processing can be suppressed and unnatural, conspicuous boundaries between image regions can be prevented.
In the visual processing method according to Remark 12, which is based on Remark 10, the greyscale transformation characteristic is a selection signal for selecting one greyscale transformation table from among a plurality of greyscale transformation tables used for the gray scale processing of the picture signal. The gray scale processing step includes a gray scale processing execution step and a correction step. In the gray scale processing execution step, the gray scale processing of the object image region is performed using the greyscale transformation table selected by the selection signal. The correction step corrects the gray scale of the gray-scale-processed picture signal; the gray scale of an object pixel, which is the object of the correction, is corrected based on the greyscale transformation tables selected for the image region containing the object pixel and for the neighboring image regions of that image region.
Here, a greyscale transformation table is, for example, a lookup table (LUT) that stores, for each pixel value of the picture signal, the pixel value of the gray-scale-processed picture signal. The neighboring image regions may be the same image regions as the surrounding image regions used when the greyscale transformation characteristic was derived, or they may be different image regions. For example, the neighboring image regions may be selected as the three image regions that are adjacent to the image region containing the object pixel and are closest to the object pixel.
The selection signal takes, for example, one of the values assigned to the respective greyscale transformation tables, namely the value assigned to the single table to be selected. In the gray scale processing step, the pixel value of the gray-scale-processed picture signal is output by referring to the LUT using the value of the selection signal and the pixel value of the picture signal. In the correction step, the gray scale of the picture signal that has been gray-scale-processed with the same greyscale transformation table throughout each object image region is corrected; the correction of the object pixel is performed, for example, so that the influence of each greyscale transformation table selected for the neighboring image regions appears in accordance with the position of the object pixel.
In the visual processing method of the present invention, the gray scale processing is performed by referring to the LUT, so the processing is speeded up. Moreover, since one greyscale transformation table is selected from the plurality of tables before the processing is performed, appropriate gray scale processing can be achieved. Furthermore, the gray scale of the picture signal can be corrected pixel by pixel, so unnatural, conspicuous boundaries between image regions are prevented and the visual effect is improved.
In the visual processing method according to Remark 13, which is based on Remark 10, the greyscale transformation characteristic is a selection signal for selecting one greyscale transformation table from among a plurality of greyscale transformation tables used for the gray scale processing of the picture signal. The gray scale processing step includes a correction step and a gray scale processing execution step. In the correction step, the selection signal is corrected and a corrected selection signal for selecting a greyscale transformation table for each pixel of the picture signal is derived. In the gray scale processing execution step, the gray scale processing of the picture signal is performed using the greyscale transformation table selected by the corrected selection signal.
Here, a greyscale transformation table is, for example, a lookup table (LUT) that stores, for each pixel value of the picture signal, the pixel value of the gray-scale-processed picture signal.
The selection signal takes, for example, one of the values assigned to the respective greyscale transformation tables, namely the value assigned to the single table to be selected. In the gray scale processing step, the pixel value of the gray-scale-processed picture signal is output by referring to the two-dimensional LUT using the value of the corrected selection signal and the pixel value of the picture signal. In the correction step, for example, the selection signal derived for each object image region is corrected based on the pixel position and on the selection signals derived for the image regions adjacent to the object image region, so that a selection signal is derived for each pixel.
In the visual processing method of the present invention, the gray scale processing is performed by referring to the LUT, so the processing can be carried out at high speed. Moreover, since one greyscale transformation table is selected from the plurality of tables before the processing is performed, appropriate gray scale processing can be achieved. Furthermore, a selection signal can be derived for each pixel, so unnatural, conspicuous boundaries between image regions are further prevented and the visual effect is improved.
The visual processing program according to Remark 14 causes a computer to execute a visual processing method comprising an image region segmentation step, a greyscale transformation characteristic derivation step, and a gray scale processing step. In the image region segmentation step, an input picture signal is divided into a plurality of image regions. In the greyscale transformation characteristic derivation step, a greyscale transformation characteristic is derived for each image region; the characteristic of an object image region, which is the region for which the characteristic is derived, is derived using the gamma characteristics of the object image region and of its surrounding image regions. In the gray scale processing step, gray scale processing of the picture signal is performed based on the derived greyscale transformation characteristic.
Here, the greyscale transformation characteristic is the characteristic of the gray scale processing applied to each image region. The gamma characteristic is, for example, a pixel value such as the luminance or brightness of each pixel.
In the visual processing program of the present invention, when the greyscale transformation characteristic of each image region is determined, not only the gamma characteristic of that image region but also the gamma characteristics of a wider area including the surrounding image regions can be used. A spatial processing effect is therefore added to the gray scale processing of each image region, and gray scale processing with a higher visual effect can be realized.
The visual processing program according to Remark 15 is the visual processing program according to Remark 14 in which the greyscale transformation characteristic is a gray-scale transformation curve. The greyscale transformation characteristic derivation step includes a histogram creation step of creating a histogram using the gamma characteristics, and a grey scale curve creation step of creating the gray-scale transformation curve based on the created histogram.
Here, the histogram is, for example, the distribution of the gamma characteristics of the pixels contained in the object image region and the surrounding image regions. In the grey scale curve creation step, for example, the cumulative curve obtained by summing the histogram values is used as the gray-scale transformation curve.
In the visual processing program of the present invention, when the histogram is created, not only the gamma characteristic of each image region but also the gamma characteristics of a wider area including the surrounding image regions can be used. The number of divisions of the picture signal can therefore be increased and the size of each image region reduced, so that the occurrence of pseudo-contours caused by the gray scale processing can be suppressed and unnatural, conspicuous boundaries between image regions can be prevented.
In the visual processing program according to Remark 16, which is based on Remark 14, the greyscale transformation characteristic is a selection signal for selecting one greyscale transformation table from among a plurality of greyscale transformation tables used for the gray scale processing of the picture signal. The gray scale processing step includes a gray scale processing execution step and a correction step. In the gray scale processing execution step, the gray scale processing of the object image region is performed using the greyscale transformation table selected by the selection signal. The correction step corrects the gray scale of the gray-scale-processed picture signal; the gray scale of an object pixel, which is the object of the correction, is corrected based on the greyscale transformation tables selected for the image region containing the object pixel and for the neighboring image regions of that image region.
Here, a greyscale transformation table is, for example, a lookup table (LUT) that stores, for each pixel value of the picture signal, the pixel value of the gray-scale-processed picture signal. The neighboring image regions may be the same image regions as the surrounding image regions used when the greyscale transformation characteristic was derived, or they may be different image regions. For example, the neighboring image regions may be selected as the three image regions that are adjacent to the image region containing the object pixel and are closest to the object pixel.
The selection signal takes, for example, one of the values assigned to the respective greyscale transformation tables, namely the value assigned to the single table to be selected. In the gray scale processing step, the pixel value of the gray-scale-processed picture signal is output by referring to the LUT using the value of the selection signal and the pixel value of the picture signal. In the correction step, for example, the gray scale of the picture signal that has been gray-scale-processed with the same greyscale transformation table throughout each object image region is corrected; the correction of the object pixel is performed so that the influence of each greyscale transformation table selected for the neighboring image regions appears in accordance with the position of the object pixel.
In the visual processing program of the present invention, the gray scale processing is performed by referring to the LUT, so the processing is speeded up. Moreover, since one greyscale transformation table is selected from the plurality of tables before the processing is performed, appropriate gray scale processing can be achieved. Furthermore, the gray scale of the picture signal can be corrected pixel by pixel, so unnatural, conspicuous boundaries between image regions are prevented and the visual effect is improved.
In the visual processing program according to Remark 17, which is based on Remark 14, the greyscale transformation characteristic is a selection signal for selecting one greyscale transformation table from among a plurality of greyscale transformation tables used for the gray scale processing of the picture signal. The gray scale processing step includes a correction step and a gray scale processing execution step. In the correction step, the selection signal is corrected and a corrected selection signal for selecting a greyscale transformation table for each pixel of the picture signal is derived. In the gray scale processing execution step, the gray scale processing of the picture signal is performed using the greyscale transformation table selected by the corrected selection signal.
Here, a greyscale transformation table is, for example, a lookup table (LUT) that stores, for each pixel value of the picture signal, the pixel value of the gray-scale-processed picture signal.
The selection signal takes, for example, one of the values assigned to the respective greyscale transformation tables, namely the value assigned to the single table to be selected. In the gray scale processing step, the pixel value of the gray-scale-processed picture signal is output by referring to the two-dimensional LUT using the value of the corrected selection signal and the pixel value of the picture signal. In the correction step, for example, the selection signal derived for each object image region is corrected based on the pixel position and on the selection signals derived for the image regions adjacent to the object image region, so that a selection signal is derived for each pixel.
In the visual processing program of the present invention, the gray scale processing is performed by referring to the LUT, so the processing can be carried out at high speed. Moreover, since one greyscale transformation table is selected from the plurality of tables before the processing is performed, appropriate gray scale processing can be achieved. Furthermore, a selection signal can be derived for each pixel, so unnatural, conspicuous boundaries between image regions are further prevented and the visual effect is improved.
In the visual processing unit according to Remark 18, which is based on Remark 1, the gray scale processing mechanism has a parameter output mechanism that outputs, based on the greyscale transformation characteristic, curve parameters of a gray-scale transformation curve used for the gray scale processing of the picture signal. The gray scale processing mechanism performs gray scale processing of the picture signal using the gray-scale transformation curve determined by the curve parameters output based on the greyscale transformation characteristic.
Here, the gray-scale transformation curve includes curves of which at least a part is a straight-line segment. The curve parameters are parameters that distinguish one gray-scale transformation curve from other gray-scale transformation curves, for example the coordinates of points on the curve, its slope, its curvature, and so on. The parameter output mechanism is, for example, a lookup table storing the curve parameters corresponding to each greyscale transformation characteristic, or an arithmetic mechanism that obtains the curve parameters by an operation such as curve approximation using curve parameters predetermined for given greyscale transformation characteristics.
In the visual processing unit of the present invention, the picture signal is gray-scale-processed in accordance with the greyscale transformation characteristic, so more appropriate gray scale processing can be performed. Moreover, it is not necessary to store in advance the values of all the gray-scale transformation curves used for the gray scale processing; the gray-scale transformation curve is determined from the output curve parameters and the processing is then performed. The memory capacity required for storing the gray-scale transformation curves can therefore be reduced.
In the visual processing unit according to Remark 19, which is based on Remark 18, the parameter output mechanism is a lookup table storing the relation between the greyscale transformation characteristic and the curve parameters.
The lookup table holds the relation between the greyscale transformation characteristic and the curve parameters, and the gray scale processing mechanism performs gray scale processing of the picture signal using the gray-scale transformation curve thus determined.
In the visual processing unit of the present invention, the picture signal is gray-scale-processed in accordance with the greyscale transformation characteristic, so more appropriate gray scale processing can be performed. Furthermore, it is not necessary to store in advance the values of all the gray-scale transformation curves used; only the curve parameters are stored, so the memory capacity required for storing the gray-scale transformation curves can be reduced.
In the visual processing unit according to Remark 20, which is based on Remark 18 or 19, the curve parameters include the value of the gray-scale-processed picture signal corresponding to a predetermined value of the picture signal.
In the gray scale processing mechanism, using the relation between the predetermined value of the picture signal and the value of the picture signal to be visually processed, the value of the gray-scale-processed picture signal is derived by nonlinear or linear interpolation of the values of the gray-scale-processed picture signal included in the curve parameters.
In the visual processing unit of the present invention, the gray-scale transformation curve is determined from the value of the gray-scale-processed picture signal corresponding to the predetermined value of the picture signal, and the gray scale processing can be performed.
In the visual processing unit according to Remark 21, which is based on any one of Remarks 18 to 20, the curve parameters include the slope of the gray-scale transformation curve in a predetermined interval of the picture signal.
In the gray scale processing mechanism, the gray-scale transformation curve is determined from its slope in the predetermined interval of the picture signal, and the value of the gray-scale-processed picture signal corresponding to the value of the picture signal is then derived using the determined curve.
In the visual processing unit of the present invention, the gray-scale transformation curve is determined by its slope in the predetermined interval of the picture signal, and the gray scale processing can be performed.
In the visual processing unit according to Remark 22, which is based on any one of Remarks 18 to 21, the curve parameters include the coordinates of at least one point through which the gray-scale transformation curve passes.
The curve parameters thus specify the coordinates of at least one point on the gray-scale transformation curve, that is, the value of the gray-scale-processed picture signal corresponding to at least one value of the picture signal. In the gray scale processing mechanism, using the relation between the specified value of the picture signal and the value of the picture signal to be visually processed, the value of the gray-scale-processed picture signal is derived by nonlinear or linear interpolation of the specified values.
In the visual processing unit of the present invention, the gray-scale transformation curve is determined by at least one point through which it passes, and the gray scale processing can be performed.
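To make the role of the curve parameters concrete, the sketch below rebuilds a gray-scale transformation curve from one point the curve passes through and the slopes on either side of it. The piecewise-linear interpretation and the 256-level range are assumptions chosen for illustration; the remarks equally allow nonlinear interpolation.

def curve_from_parameters(x0, y0, slope_low, slope_high, levels=256):
    # (x0, y0): a point the gray-scale transformation curve passes through
    # slope_low / slope_high: slopes of the curve below and above x0
    curve = []
    for x in range(levels):
        slope = slope_low if x <= x0 else slope_high
        y = y0 + slope * (x - x0)
        curve.append(min(max(y, 0), levels - 1))   # clamp to the output range
    return curve   # curve[v] = processed value for input value v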
The visual processing unit according to Remark 23 comprises a spatial processing mechanism and a visual processing mechanism. The spatial processing mechanism performs spatial processing on each of a plurality of image regions of an input picture signal and derives a spatially processed signal; in the spatial processing, a weighted average of the gamma characteristics of an object image region, which is the object of the spatial processing, and of its surrounding image regions is taken, using weights based on the difference between the gamma characteristics of the object image region and of the surrounding image regions. The visual processing mechanism performs visual processing of the object image region based on the gamma characteristic of the object image region and the spatially processed signal.
Here, an image region means a region consisting of a plurality of pixels, or a single pixel, in the image. A gamma characteristic means a value based on the pixels, such as their luminance or brightness. The gamma characteristic of an image region is, for example, the mean (simple or weighted), the maximum, or the minimum of the pixel values of the pixels contained in the image region.
The spatial processing mechanism performs the spatial processing of the object image region using the gamma characteristics of the surrounding image regions. In the spatial processing, the gamma characteristics of the object image region and the surrounding image regions are averaged with weights, and each weight is set based on the difference between the gamma characteristic of the object image region and that of the surrounding image region.
In the visual processing unit of the present invention, the influence on the spatially processed signal of image regions whose gamma characteristics differ greatly can be suppressed. Even when a surrounding image region contains, for example, the boundary of an object and its gamma characteristic differs greatly from that of the object image region, an appropriate spatially processed signal can be derived. As a result, in visual processing that uses the spatially processed signal, the occurrence of pseudo-contours and the like can also be suppressed, and visual processing that improves the visual effect can be realized.
The visual processing unit according to Remark 24 is the visual processing unit according to Remark 23 in which the weight is smaller the larger the absolute value of the difference between the gamma characteristics is.
Here, the weight may be given as a value that decreases monotonically with the difference between the gamma characteristics, or may be set to a predetermined value by comparing the difference between the gamma characteristics with a predetermined threshold.
In the visual processing unit of the present invention, the influence on the spatially processed signal of image regions whose gamma characteristics differ greatly can be suppressed. Even when a surrounding image region contains, for example, the boundary of an object and its gamma characteristic differs greatly from that of the object image region, an appropriate spatially processed signal can be derived. As a result, in visual processing that uses the spatially processed signal, the occurrence of pseudo-contours and the like can also be suppressed, and visual processing that improves the visual effect can be realized.
The visual processing unit according to Remark 25 is the visual processing unit according to Remark 23 or 24 in which the weight is smaller the larger the distance between the object image region and the surrounding image region is.
Here, the weight may be given as a value that decreases monotonically with the distance between the object image region and the surrounding image region, or may be set to a predetermined value by comparing that distance with a predetermined threshold.
In the visual processing unit of the present invention, the influence on the spatially processed signal of surrounding image regions that are distant from the object image region can be suppressed. Accordingly, whether a surrounding image region contains the boundary of an object and its gamma characteristic differs greatly from that of the object image region, or the surrounding image region simply lies far from the object image region, its influence is suppressed and a more appropriate spatially processed signal can be derived.
The visual processing unit according to Remark 26 is the visual processing unit according to any one of Remarks 23 to 25 in which each image region consists of a plurality of pixels, and the gamma characteristics of the object image region and the surrounding image regions are determined as characteristic quantities of the pixel values constituting the respective image regions.
In the visual processing unit of the present invention, when the spatial processing of each image region is performed, not only the pixels contained in that image region but also the gamma characteristics of the pixels contained in a wider area including the surrounding image regions can be used. More appropriate spatial processing can therefore be performed. As a result, in visual processing that uses the spatially processed signal, the occurrence of pseudo-contours and the like can also be suppressed, and visual processing that improves the visual effect can be realized.
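The weighted average described in Remarks 23 to 26 can be pictured as follows. The Gaussian weight shapes and the constants are assumptions made for this sketch, since the remarks only require the weight to shrink as the difference in gamma characteristic or the distance grows.

import math

def spatially_processed_signal(object_char, surrounding):
    # surrounding: list of (gamma_characteristic, distance_to_object_region) pairs
    numerator = object_char      # the object image region itself contributes with weight 1
    denominator = 1.0
    for char, dist in surrounding:
        w = math.exp(-((char - object_char) ** 2) / (2 * 30.0 ** 2))  # smaller for larger difference
        w *= math.exp(-(dist ** 2) / (2 * 2.0 ** 2))                  # smaller for larger distance
        numerator += w * char
        denominator += w
    return numerator / denominator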
(The 2nd remarks)
The present invention can also be expressed as follows. In addition, the dependent remarks described in this column ("the 2nd remarks") are dependent on other remarks of the 2nd remarks.
(contents of the 2nd remarks)
(remarks 1)
A visual processing unit comprising:
an input signal processing mechanism that performs predetermined processing on an input picture signal and outputs a processed signal; and
a visual processing mechanism that outputs an output signal, which is the visually processed picture signal, based on a two-dimensional LUT giving the relation between the input picture signal and the processed signal on the one hand and the output signal on the other.
(remarks 2)
The visual processing unit according to Remark 1, wherein
in the two-dimensional LUT, the picture signal and the output signal have a nonlinear relation.
(remarks 3)
The visual processing unit according to Remark 2, wherein
in the two-dimensional LUT, both the picture signal and the processed signal have a nonlinear relation with the output signal.
(remarks 4)
The visual processing unit according to any one of Remarks 1 to 3, wherein
the value of each element of the two-dimensional LUT is determined based on a mathematical expression that includes an enhancement operation for enhancing a value computed from the picture signal and the processed signal.
(remarks 5)
The visual processing unit according to Remark 4, wherein
the processed signal is a signal obtained by performing the predetermined processing on the picture signal of a pixel of interest and of the pixels surrounding the pixel of interest.
(remarks 6)
The visual processing unit according to Remark 4 or 5, wherein
the enhancement operation is a nonlinear function.
(remarks 7)
The visual processing unit according to any one of Remarks 4 to 6, wherein
the enhancement operation is an enhancement function that enhances the difference between the values obtained by applying a predetermined transformation to the picture signal and to the processed signal.
(remarks 8)
The visual processing unit according to Remark 7, wherein
the value C of each element of the two-dimensional LUT is determined, for a value A of the picture signal and a value B of the processed signal, based on the mathematical expression F2(F1(A)) + F3(F1(A) - F1(B)), where F1 is the transformation function, F2 is the inverse transformation function of F1, and F3 is the enhancement function.
(remarks 9)
The visual processing unit according to Remark 8, wherein
the transformation function F1 is a logarithmic function.
(remarks 10)
The visual processing unit according to Remark 8, wherein
the inverse transformation function F2 is a gamma correction function.
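As an illustration of Remarks 8 to 10, the following sketch fills a two-dimensional LUT from the expression F2(F1(A)) + F3(F1(A) - F1(B)), taking F1 as a logarithm, F2 as its inverse, and F3 as a simple linear gain. These concrete choices and the 256-level range are assumptions for the example only.

import numpy as np

def build_lut_difference(levels=256, gain=1.5):
    F1 = lambda x: np.log(x + 1.0)        # transformation function (logarithmic)
    F2 = lambda x: np.exp(x) - 1.0        # inverse transformation function of F1
    F3 = lambda d: gain * d               # enhancement function applied to the difference
    A = np.arange(levels, dtype=np.float64)[None, :]   # picture signal value
    B = np.arange(levels, dtype=np.float64)[:, None]   # processed (unsharp) signal value
    C = F2(F1(A)) + F3(F1(A) - F1(B))                  # value of each LUT element
    return np.clip(C, 0, levels - 1)                   # lut[b, a] = output for A = a, B = b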
(remarks 11)
The visual processing unit according to any one of Remarks 4 to 6, wherein
the enhancement operation is an enhancement function that enhances the ratio between the picture signal and the processed signal.
(remarks 12)
The visual processing unit according to Remark 11, wherein
the value C of each element of the two-dimensional LUT is determined, for a value A of the picture signal and a value B of the processed signal, based on the mathematical expression F4(A) × F5(A/B), where F4 is a dynamic range compression function and F5 is the enhancement function.
(remarks 13)
The visual processing unit according to Remark 12, wherein
the dynamic range compression function F4 is a monotonically increasing function.
(remarks 14)
According to remarks 13 described visual processing unit,
Above-mentioned dynamic range compression function F4 is an upwardly convex function.
(remarks 15)
According to remarks 12 described visual processing unit,
Above-mentioned dynamic range compression function F 4 is power functions.
(remarks 16)
According to remarks 12 described visual processing unit,
Above-mentioned dynamic range compression function F 4 is direct proportion functions of proportionality coefficient 1.
(remarks 17)
According to each the described visual processing unit in the remarks 12~16,
Above-mentioned reinforcement function F 5 is power functions.
(remarks 18)
According to remarks 11 described visual processing unit,
Above-mentioned mathematical expression further comprises: for above-mentioned picture signal of being strengthened by above-mentioned reinforcement function and the ratio between the above-mentioned processing signals, carry out the computing of dynamic range compression.
(remarks 19)
According to each the described visual processing unit in the remarks 4~6,
The computing of above-mentioned reinforcement comprises the value according to above-mentioned picture signal, the function that the difference between above-mentioned picture signal and the above-mentioned processing signals is strengthened.
(remarks 20)
According to remarks 19 described visual processing unit,
The value C of each key element of above-mentioned 2 dimension LUT is determined, for the value A of above-mentioned picture signal, the value B of above-mentioned processing signal, amount-of-reinforcement adjustment function F6, reinforcement function F7, and dynamic range compression function F8, based on the mathematical expression F8(A) + F6(A) * F7(A-B).
(remarks 21)
According to remarks 20 described visual processing unit,
Above-mentioned dynamic range compression function F 8 is monotone increasing functions.
(remarks 22)
According to remarks 21 described visual processing unit,
Above-mentioned dynamic range compression function F8 is an upwardly convex function.
(remarks 23)
According to remarks 20 described visual processing unit,
Above-mentioned dynamic range compression function F 8 is power functions.
(remarks 24)
According to remarks 20 described visual processing unit,
Above-mentioned dynamic range compression function F 8 is direct proportion functions of proportionality coefficient 1.
(remarks 25)
According to remarks 19 described visual processing unit,
Above-mentioned mathematical expression further comprises: for the value of strengthening by the computing of above-mentioned reinforcement, add the computing of above-mentioned picture signal being carried out the value after the dynamic range compression.
(remarks 26)
According to each the described visual processing unit in the remarks 4~6,
The computing of above-mentioned reinforcement is the reinforcement function that the difference between above-mentioned picture signal and the above-mentioned processing signals is strengthened,
Above-mentioned mathematical expression further comprises: for the value of being strengthened by above-mentioned reinforcement function, add that the value after the value of above-mentioned picture signal carries out the computing of gray correction.
(remarks 27)
According to remarks 26 described visual processing unit,
The value C of each key element of above-mentioned 2 dimension LUT is determined, for the value A of above-mentioned picture signal, the value B of above-mentioned processing signal, above-mentioned reinforcement function F9, and gray correction function F10, based on the mathematical expression F10(A + F9(A-B)).
(remarks 28)
According to each the described visual processing unit in the remarks 4~6,
The computing of above-mentioned reinforcement is the reinforcement function that the difference between above-mentioned picture signal and the above-mentioned processing signals is strengthened,
Above-mentioned mathematical expression further comprises: for the value of being strengthened by above-mentioned reinforcement function, add the computing of above-mentioned picture signal being carried out the value after the gray correction.
(remarks 29)
According to remarks 28 described visual processing unit,
The value C of each key element of above-mentioned 2 dimension LUT is determined, for the value A of above-mentioned picture signal, the value B of above-mentioned processing signal, above-mentioned reinforcement function F11, and gray correction function F12, based on the mathematical expression F12(A) + F11(A-B).
(remarks 30)
According to each the described visual processing unit in the remarks 1~29,
In above-mentioned 2 dimension LUT, the value preserved for above-mentioned picture signal and above-mentioned processing signal of identical value increases or decreases monotonically with respect to the values of above-mentioned picture signal and above-mentioned processing signal.
(remarks 31)
According to each the described visual processing unit in the remarks 1~3,
Above-mentioned 2 dimension LUT preserves the relation between above-mentioned picture signal and above-mentioned output signal as a gray-scale transformation curve group composed of a plurality of gray-scale transformation curves.
(remarks 32)
According to remarks 31 described visual processing unit,
Each of above-mentioned gray-scale transformation curves increases monotonically with respect to the value of the picture signal.
(remarks 33)
According to remarks 31 or 32 described visual processing unit,
Above-mentioned processing signals is the signal that is used for selecting from above-mentioned a plurality of gray-scale transformation curve groups corresponding gray-scale transformation curve.
(remarks 34)
According to remarks 33 described visual processing unit,
The value of above-mentioned processing signal is associated with at least one gray-scale transformation curve included in above-mentioned gray-scale transformation curve group.
(remarks 35)
According to each the described visual processing unit in the remarks 1~34,
In above-mentioned 2 dimension LUT, the description document data that login is made in advance by the computing of regulation.
(remarks 36)
According to remarks 35 described visual processing unit,
Above-mentioned 2 dimension LUT can change by the login of description document data.
(remarks 37)
According to remarks 35 or 36 described visual processing unit,
Also possess description document data entry mechanism, it is used for making the foregoing description file data to login at above-mentioned vision processor structure.
(remarks 38)
According to remarks 35 described visual processing unit,
Above-mentioned vision processor structure obtains the foregoing description file data made by an external device.
(remarks 39)
According to remarks 38 described visual processing unit,
Pass through the foregoing description file data that obtained, can change above-mentioned 2 dimension LUT.
(remarks 40)
According to remarks 38 or 39 described visual processing unit,
Above-mentioned vision processor structure obtains the foregoing description file data by communication network.
(remarks 41)
According to remarks 35 described visual processing unit,
Also possess description document data creating mechanism, it makes the foregoing description file data.
(remarks 42)
According to the visual processing unit shown in the remarks 41,
The foregoing description document data creating mechanism makes the foregoing description file data based on a histogram of the gamma characteristic of above-mentioned picture signal.
(remarks 43)
According to remarks 35 described visual processing unit,
The foregoing description file data of logining in above-mentioned 2 dimension LUT, condition according to the rules is carried out switching.
(remarks 44)
According to remarks 43 described visual processing unit,
The above-mentioned prescribed condition is a condition relevant to lightness.
(remarks 45)
According to remarks 44 described visual processing unit,
Above-mentioned lightness is the lightness of above-mentioned picture signal.
(remarks 46)
According to remarks 45 described visual processing unit,
Also possess lightness decision mechanism, its lightness to above-mentioned picture signal is judged.
The description document data of logining among the LUT in above-mentioned 2 dimensions are carried out switching according to the result of determination of above-mentioned lightness decision mechanism.
(remarks 47)
According to remarks 44 described visual processing unit,
Also possess the lightness input mechanism, it is transfused to the relevant condition of above-mentioned lightness,
The description document data of logining among the LUT in above-mentioned 2 dimensions are carried out switching according to the input results of above-mentioned lightness input mechanism.
(remarks 48)
According to remarks 47 described visual processing unit,
Above-mentioned lightness input mechanism is transfused with the lightness of the output environment of above-mentioned output signal or the lightness of the input environment of above-mentioned input signal.
(remarks 49)
According to remarks 44 described visual processing unit,
Also possesses a lightness testing agency, which detects at least two kinds of above-mentioned lightness,
The description document data of logining among the LUT in above-mentioned 2 dimensions are carried out switching according to the testing result of above-mentioned lightness testing agency.
(remarks 50)
According to remarks 49 described visual processing unit
The above-mentioned lightness that above-mentioned lightness testing agency detects includes: the lightness of above-mentioned picture signal, and the lightness of the output environment of above-mentioned output signal or of the input environment of above-mentioned input signal.
(remarks 51)
According to remarks 43 described visual processing unit,
Also possess: the selection mechanism of description document data, it carries out the selection of the foregoing description file data logined among the LUT in above-mentioned 2 dimensions,
The description document data of logining among the LUT in above-mentioned 2 dimensions are carried out switching according to the selection result of foregoing description file data selection mechanism.
(remarks 52)
According to remarks 51 described visual processing unit,
The selection mechanism of foregoing description file data is the input unit that is used to be described the selection of file.
(remarks 53)
According to remarks 43 described visual processing unit,
Also possess picture characteristics decision mechanism, its picture characteristics to above-mentioned picture signal judges,
The description document data of logining in above-mentioned 2 dimension LUT are switched according to the judged result of above-mentioned picture characteristics decision mechanism.
(remarks 54)
According to remarks 43 described visual processing unit,
Also possess User Recognition mechanism, it is discerned the user,
The description document data of logining among the LUT in above-mentioned 2 dimensions are switched according to the recognition result of User Recognition mechanism.
(remarks 55)
According to each the described visual processing unit in the remarks 1~54,
Above-mentioned vision processor structure, it carries out interpolation operation to the value that above-mentioned 2 dimension LUT preserve, with above-mentioned output signal output.
(remarks 56)
According to remarks 55 described visual processing unit,
Above-mentioned interpolation operation is a linear interpolation based on the values of the lower-order bits of at least one of above-mentioned picture signal and above-mentioned processing signal expressed in binary.
(remarks 57)
According to each the described visual processing unit in the remarks 1~56,
Above-mentioned input signal processing mechanism carries out spatial manipulation for above-mentioned picture signal.
(remarks 58)
According to remarks 57 described visual processing unit,
Above-mentioned input signal processing mechanism, it generates unsharp signal according to above-mentioned picture signal.
(remarks 59)
According to remarks 57 or 58 described visual processing unit,
In above-mentioned spatial manipulation, a mean value, maximum value, or minimum value of the picture signal is derived.
(remarks 60)
According to each the described visual processing unit in the remarks 1~59,
Above-mentioned vision processor structure carries out spatial manipulation and gray scale processing using above-mentioned picture signal that is transfused and above-mentioned processing signal.
(remarks 61)
A kind of visual processing method, it possesses:
The input signal treatment step carries out certain processing for the picture signal that is transfused to, and processing signals is exported; With
Visual treatment step is based on giving the above-mentioned picture signal that is transfused to and above-mentioned processing signals and as by 2 dimension LUT of the relation between the output signal of the above-mentioned picture signal after the visual processing, with above-mentioned output signal output.
(remarks 62)
A kind of visual handling procedure, it is used for being undertaken by computer the visual processing method of following steps,
Above-mentioned visual processing method comprises:
The input signal treatment step carries out certain processing for the picture signal that is transfused to, and processing signals is exported; With
Visual treatment step is based on giving the above-mentioned picture signal that is transfused to and above-mentioned processing signals and as by 2 dimension LUT of the relation between the output signal of the above-mentioned picture signal after the visual processing, with above-mentioned output signal output.
(remarks 63)
A kind of integrated circuit, it comprises each the described visual processing unit in the remarks 1~60.
(remarks 64)
A kind of display unit, it possesses:
The described visual processing unit of in the remarks 1~60 each; With
An indication mechanism, which carries out the display of the above-mentioned output signal exported from above-mentioned visual processing unit.
(remarks 65)
A kind of filming apparatus, it possesses:
Carry out the photographic unit of the shooting of image; With
Will be by the captured image of above-mentioned photographic unit, as above-mentioned picture signal, carry out each the described visual processing unit in the remarks 1~60 of visual processing.
(remarks 66)
A kind of portable information terminal, it possesses:
Data Receiving mechanism, the view data of its received communication or broadcasting;
With the above-mentioned view data that is received,, carry out each the described visual processing unit in the remarks 1~60 of visual processing as above-mentioned picture signal; With
Indication mechanism, it carries out the display of above-mentioned picture signal that has been visually processed by above-mentioned visual processing unit.
(remarks 67)
A kind of photographing information terminal, it possesses:
Photographic unit, it carries out the shooting of image;
Will be by the captured image of above-mentioned photographic unit, as above-mentioned picture signal, carry out each the described visual processing unit in the remarks 1~60 of visual processing; With
The data transmitter structure, its transmission is above-mentioned by the above-mentioned picture signal of visual processing.
(remarks 68)
A kind of image processing apparatus, which carries out image processing of a received image signal that is transfused, possesses:
Description document data creating mechanism, it makes the description document data of using in the image processing based on a plurality of description document data that are used to carry out different image processing;
With image processing actuator, it uses the foregoing description file data of making mechanism's made by the foregoing description file data, carries out above-mentioned image processing.
(remarks 69)
A kind of image processing apparatus, which carries out image processing of a received image signal that is transfused, possesses:
Description document information output mechanism, which outputs description document information that determines the description document data to be used in above-mentioned image processing; With
Image processing actuator, which carries out above-mentioned image processing using the description document data determined based on the information exported from foregoing description fileinfo output mechanism.
(remarks 70)
According to remarks 69 described image processing apparatus,
Foregoing description fileinfo output mechanism exports the foregoing description fileinfo according to information expressing the display environment of the above-mentioned received image signal that has undergone image processing.
(remarks 71)
According to remarks 69 described image processing apparatus,
Foregoing description fileinfo output mechanism, it is exported the foregoing description fileinfo according to the information relevant with the description document data in the information that comprises in the above-mentioned received image signal.
(remarks 72)
According to remarks 69 described image processing apparatus,
Foregoing description fileinfo output mechanism, it is exported the foregoing description fileinfo according to the information relevant with the feature of the above-mentioned image processing that is obtained.
(remarks 73)
According to remarks 69 described image processing apparatus,
Foregoing description fileinfo output mechanism, it exports the foregoing description fileinfo according to information related to the environment in which above-mentioned received image signal is generated.
(remarks 74)
According to remarks 69 described image processing apparatus,
Above-mentioned received image signal comprises view data and attribute information of above-mentioned received image signal,
Foregoing description fileinfo output mechanism, it is exported the foregoing description fileinfo according to above-mentioned attribute information.
(remarks 75)
According to remarks 74 described image processing apparatus,
So-called above-mentioned attribute information comprises the relevant integrity attribute information of integral body of above-mentioned view data.
(remarks 76)
According to remarks 74 or 75 described image processing apparatus,
So-called above-mentioned attribute information comprises the relevant part attribute information of a part of above-mentioned view data.
(remarks 77)
According to remarks 74 described image processing apparatus,
So-called above-mentioned attribute information comprises build environment attribute information related to the environment in which above-mentioned received image signal is generated.
(remarks 78)
According to remarks 74 described image processing apparatus,
So-called above-mentioned attribute information comprises and the relevant media property information of medium that obtains above-mentioned received image signal.
(remarks 79)
According to each the described image processing apparatus in the remarks 68~78,
The foregoing description file data is 2 dimension LUT,
Above-mentioned image processing actuator comprises each the described visual processing unit in the remarks 1~60.
(remarks 80)
A kind of image processing apparatus, it possesses:
Image processing actuator, it carries out image processing to the received image signal that is transfused to;
Description document information output mechanism, which outputs description document information that determines the description document data suitable for the image processing of the received image signal that is transfused to.
(remarks 81)
A kind of integrated circuit, it comprises each the described image processing apparatus in the remarks 68~80.
(remarks 82)
A kind of display unit, it possesses:
The described image processing apparatus of in the remarks 68~80 each; With
Indication mechanism, it carries out the display of above-mentioned received image signal that has been image-processed by above-mentioned image processing apparatus.
(remarks 83)
A kind of filming apparatus possesses:
Photographic unit, it carries out the shooting of image; With
Will be by the captured image of above-mentioned photographic unit, as above-mentioned received image signal, carry out each the described image processing apparatus in the remarks 68~80 of image processing.
(remarks 84)
A kind of portable information terminal possesses:
Data Receiving mechanism, the view data of its received communication or broadcasting;
With the above-mentioned view data that is received,, carry out each the described image processing apparatus in the remarks 68~80 of image processing as above-mentioned received image signal; With
Indication mechanism, it carries out the display of above-mentioned received image signal that has been image-processed by above-mentioned image processing apparatus.
(remarks 85)
A kind of portable information terminal, it possesses:
Photographic unit, it carries out the shooting of image;
Will be by the captured image of above-mentioned photographic unit, as above-mentioned received image signal, carry out each the described image processing apparatus in the remarks 68~80 of image processing; With
The data transmitter structure, its transmission is above-mentioned by the above-mentioned received image signal of image processing.
(explanations of the 2nd remarks)
Remarks 1 described visual processing unit possesses: input signal processing mechanism and vision processor structure.The input signal processing mechanism, it carries out certain processing for the picture signal that is transfused to, and processing signals is exported.The vision processor structure is based on giving the picture signal that is transfused to and processing signals and as by 2 dimension LUT of the relation between the output signal of the picture signal of visual processing, output signal being exported.
Here, the so-called certain processing is, for example, direct or indirect processing of the picture signal, and includes processing such as spatial processing and gray scale processing that converts the pixel values of the picture signal.
In the visual processing unit of the present invention, visual processing is carried out using a 2 dimension LUT that records the relation between the picture signal and the processing signal on one hand and the visually processed output signal on the other. Therefore, a hardware configuration that does not depend on the function realized by the 2 dimension LUT can be achieved. That is, a hardware configuration that does not depend on the visual processing realized by the device as a whole can be achieved.
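As a purely illustrative sketch (not part of the embodiment itself), this table lookup can be pictured as follows in Python; the table size, 8-bit signal range, and the name visual_process are assumptions for illustration only.

# Minimal sketch of visual processing through a 2 dimension LUT.
# 'lut' is assumed to be a 256 x 256 table prepared in advance for 8-bit signals.
def visual_process(input_signal, unsharp_signal, lut):
    # Each output value is read directly from the table, so the hardware
    # does not depend on which visual effect the table encodes.
    return [lut[a][b] for a, b in zip(input_signal, unsharp_signal)]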
Remarks 2 described visual processing unit are according to remarks 1 described visual processing unit, in 2 dimension LUT, have nonlinear relation between picture signal and the output signal.
Here, the statement that the picture signal and the output signal have a nonlinear relation means, for example, that the value of each key element of the 2 dimension LUT is expressed by a nonlinear function of the picture signal, or that the relation is difficult to formulate as a function.
In the visual processing unit of the present invention, visual processing suited to the visual characteristics with respect to the picture signal, or visual processing suited to the nonlinear characteristics of the device that outputs the output signal, can be realized.
Remarks 3 described visual processing unit are according to remarks 2 described visual processing unit, in 2 dimension LUT, and the both sides of picture signal and processing signals, and have nonlinear relation between the output signal.
Here, the statement that both the picture signal and the processing signal have a nonlinear relation with the output signal means, for example, that the value of each key element of the 2 dimension LUT is expressed by a nonlinear function of the two variables corresponding to the picture signal and the processing signal, or that the relation is difficult to formulate as a function.
In the visual processing unit of the present invention, even when the value of the picture signal is identical, different visual processing can be realized according to the value of the processing signal when the value of the processing signal differs.
Remarks 4 described visual processing unit are according to each the described visual processing unit in the remarks 1~3, and the value of each key element of 2 dimension LUT determines in interior mathematical expression based on the computing that comprises the value that reinforcement calculates according to picture signal and processing signals.
Here, the value calculated from the picture signal and the processing signal is, for example, a value obtained by an arithmetic operation between the picture signal and the processing signal, or a value obtained by an operation on values of the picture signal or the processing signal transformed by a certain function. The so-called reinforcement operation is, for example, an operation that adjusts gain, an operation that suppresses excessive contrast, or an operation that suppresses small-amplitude noise components.
In visual processing unit of the present invention, can strengthen the value of calculating according to picture signal and processing signals.
Remarks 5 described visual processing unit are according to remarks 4 described visual processing unit, and processing signals is for the picture signal between the neighboring pixel of concerned pixel and concerned pixel, the signal that carries out certain processing.
Here, the so-called certain processing is, for example, spatial processing that uses the pixels neighboring a concerned pixel, such as processing that derives a mean value, maximum value, or minimum value of the concerned pixel and its neighboring pixels.
In the visual processing unit of the present invention, even for concerned pixels of identical value, different visual processing can be realized according to the influence of the surrounding pixels.
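A hedged sketch of one possible form of the certain processing follows; the averaging window and the name mean_filter are assumptions and only illustrate deriving a mean value over a concerned pixel and its neighboring pixels.

# Sketch: processing signal for one pixel as the mean of the pixel of interest
# and its neighbours (an unsharp-style spatial processing).
def mean_filter(image, x, y, radius=1):
    h, w = len(image), len(image[0])
    values = [image[j][i]
              for j in range(max(0, y - radius), min(h, y + radius + 1))
              for i in range(max(0, x - radius), min(w, x + radius + 1))]
    return sum(values) / len(values)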
Remarks 6 described visual processing unit are that the computing of reinforcement is non-linear function according to remarks 4 or 5 described visual processing unit.
In the visual processing unit of the present invention, reinforcement suited to the visual characteristics with respect to the picture signal, or reinforcement suited to the nonlinear characteristics of the device that outputs the output signal, can be realized.
Visual processing unit shown in the remarks 7 is according to each the described visual processing unit in the remarks 4~6, and the computing of reinforcement is the reinforcement function that the difference of each transformed value of the conversion of stipulating for picture signal and processing signals is strengthened.
Here, the so-called reinforcement function is, for example, a function that adjusts gain, a function that suppresses excessive contrast, or a function that suppresses small-amplitude noise components.
In the visual processing unit of the present invention, the picture signal and the processing signal are transformed into a different space and the difference between them is then strengthened. In this way, reinforcement suited to the visual characteristics and the like can be realized.
Remarks 8 described visual processing unit are according to remarks 7 described visual processing unit, the value C of each key element of 2 dimension LUT, be value A, value B, the transforming function transformation function F1 of processing signals, the inverse transform function F2 of transforming function transformation function F1, reinforcement function F 3, determine based on mathematical expression F2 (F1 (A)+F3 (F1 (A)-F1 (B))) for picture signal.
Here, the 2 dimension LUT stores the value C of the key element corresponding to the value A of the picture signal and the value B of the processing signal (the same applies throughout this section). The value of each signal may be the value itself or an approximation of it (the same applies throughout this section). The so-called reinforcement function F3 is, for example, a function that adjusts gain, a function that suppresses excessive contrast, or a function that suppresses small-amplitude noise components.
The value C of each key element is expressed as follows. That is, the value A of the picture signal and the value B of the processing signal are transformed by the transforming function F1 into values on a different space. The difference between the transformed value of the picture signal and the transformed value of the processing signal represents, for example, a clear signal on that different space. The difference between the transformed picture signal and the transformed processing signal, strengthened by the reinforcement function F3, is added to the transformed picture signal, and the sum is returned by the inverse transform function F2. Thus, the value C of each key element represents a value in which the clear signal component has been reinforced on the different space.
In the visual processing unit of the present invention, by using the value A of the picture signal and the value B of the processing signal transformed into a different space, processing such as edge reinforcement and contrast reinforcement can be carried out on that different space.
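The way the 2 dimension LUT of remarks 8 could be filled is sketched below under explicit assumptions: F1 is taken as a logarithm, F2 as its inverse (an exponential), F3 as a simple gain, and the signals are normalized to (0, 1]; none of these choices is prescribed by the embodiment.

import math

# Sketch of filling a LUT per C = F2(F1(A) + F3(F1(A) - F1(B))).
def build_lut_remarks8(size=256, gain=1.2):
    def F1(x): return math.log(x)          # transform into logarithmic space
    def F2(x): return math.exp(x)          # inverse transform back
    def F3(d): return gain * d             # reinforcement of the difference
    lut = [[0.0] * size for _ in range(size)]
    for a in range(size):
        for b in range(size):
            A, B = (a + 1) / size, (b + 1) / size   # avoid log(0)
            lut[a][b] = F2(F1(A) + F3(F1(A) - F1(B)))
    return lut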
Remarks 9 described visual processing unit are that transforming function transformation function F1 is a logarithmic function according to remarks 8 described visual processing unit.
Here, human visual characteristics are generally logarithmic. Thus, if the picture signal and the processing signal are transformed into logarithmic space before processing, processing suited to the visual characteristics can be carried out.
In the visual processing unit of the present invention, contrast reinforcement of high visual effect, or dynamic range compression that maintains local contrast, can be carried out.
Remarks 10 described visual processing unit are that inverse transform function F2 is the gamma correction function according to remarks 8 described visual processing unit.
Here, a picture signal is generally gamma-corrected by a gamma correction function according to the gamma characteristics of the device that inputs and outputs the picture signal.
In the visual processing unit of the present invention, the transforming function F1 removes the gamma correction of the picture signal, so that processing can be carried out with linear characteristics. This makes it possible to correct optical blur.
Remarks 11 described visual processing unit are according to each the described visual processing unit in the remarks 4~6, and the computing of reinforcement is the reinforcement function that the ratio between picture signal and the processing signals is strengthened.
In the visual processing unit of the present invention, the ratio between the picture signal and the processing signal represents, for example, the clear composition of the picture signal. Therefore, visual processing that strengthens the clear composition, for example, can be realized.
Remarks 12 described visual processing unit are according to remarks 11 described visual processing unit, the value C of each key element of 2 dimension LUT, be value A, the value B of processing signals, dynamic range compression function F 4, reinforcement function F 5, determine based on mathematical expression F4 (A) * F5 (A/B) for picture signal.
At this, the value C of each key element is expressed as follows.That is, the division amount (A/B) between the value A of picture signal and the value B of processing signals is represented for example clear signal.And F5 (A/B) represents for example amount of reinforcement of clear signal.These, expression with the value A of picture signal and the value B of processing signals are transformed into the number space, each difference is carried out the processing of intensive treatment equivalence, be suitable for the intensive treatment of visual characteristic.
In the visual processing unit of the present invention, local contrast can be strengthened while the dynamic range is compressed as required.
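A minimal sketch of one LUT element for remarks 12 follows, assuming F4 and F5 are power functions and that A and B are normalized to (0, 1]; the exponents are illustrative values, not values required by the embodiment.

# Sketch of C = F4(A) * F5(A/B) with power functions.
def lut_value_remarks12(A, B, gamma_drc=0.6, gamma_enh=0.5):
    F4 = lambda x: x ** gamma_drc     # dynamic range compression
    F5 = lambda r: r ** gamma_enh     # reinforcement of the ratio A/B
    return F4(A) * F5(A / B)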
Remarks 13 described visual processing unit are that dynamic range compression function F 4 is monotone increasing functions according to remarks 12 described visual processing unit.
In the visual processing unit of the present invention, by using a monotone increasing function as the dynamic range compression function F4, local contrast can be strengthened while dynamic range compression is carried out.
Remarks 14 described visual processing unit are that dynamic range compression function F4 is an upwardly convex function according to remarks 13 described visual processing unit.
In the visual processing unit of the present invention, by using an upwardly convex function as the dynamic range compression function F4, local contrast can be strengthened while dynamic range compression is carried out.
Remarks 15 described visual processing unit are that dynamic range compression function F 4 is power functions according to remarks 12 described visual processing unit.
In the visual processing unit of the present invention, by using a power function as the dynamic range compression function F4, local contrast can be strengthened while the dynamic range is converted.
Remarks 16 described visual processing unit are that dynamic range compression function F 4 is direct proportion functions of proportionality coefficient 1 according to remarks 12 described visual processing unit.
In the visual processing unit of the present invention, contrast can be strengthened uniformly from the dark portion to the bright portion of the picture signal. This contrast reinforcement is reinforcement suited to the visual characteristics.
Remarks 17 described visual processing unit are, according to each the described visual processing unit in the remarks 12~16, such that the reinforcement function F5 is a power function.
In the visual processing unit of the present invention, by using a power function as the reinforcement function F5, local contrast can be strengthened while the dynamic range is converted.
Remarks 18 described visual processing unit are according to remarks 11 described visual processing unit, and mathematical expression further comprises for picture signal of strengthening by the reinforcement function and the ratio between the processing signals, carries out the computing of dynamic range compression.
In the visual processing unit of the present invention, the clear composition of the picture signal, represented for example by the ratio between the picture signal and the processing signal, can be strengthened while the dynamic range is compressed.
Visual processing unit shown in the remarks 19 is according to each the described visual processing unit in the remarks 4~6, and the computing of reinforcement comprises the function of strengthening the difference between picture signal and the processing signals according to the value of picture signal.
In the visual processing unit of the present invention, the clear signal, such as the difference between the picture signal and the processing signal, can be strengthened according to the value of the picture signal. Therefore, suitable reinforcement can be carried out from the dark portion to the bright portion of the picture signal.
Remarks 20 described visual processing unit are according to remarks 19 described visual processing unit, the value C of each key element of 2 dimension LUT, be value A, the value B of processing signals, amount of reinforcement adjustment function F 6, reinforcement function F 7, dynamic range compression function F 8, determine based on mathematical expression F8 (A)+F6 (A) * F7 (A-B) for picture signal.
At this, the value C of each key element is expressed as follows.That is, the difference (A-B) between the value A of picture signal and the value B of processing signals is expressed as for example clear signal.And F7 (A-B) is expressed as for example amount of reinforcement of clear signal.Further, amount of reinforcement is adjusted function F 6 by amount of reinforcement, be adjusted according to the value A of picture signal, as required for carried out dynamic range compression the value addition of picture signal.
In visual processing unit of the present invention, though for example the value A of picture signal is bigger, can reduce amount of reinforcement, keep from the contrast of dark portion to bright portion.And,, also can keep from the local contrast of dark portion to bright portion even under the situation of carrying out dynamic range compression.
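One possible instance of remarks 20 is sketched below; F8 is assumed to be a power function, F6 a simple decreasing function of A, and F7 a gain, with A and B normalized to [0, 1]; all are illustrative choices.

# Sketch of C = F8(A) + F6(A) * F7(A - B); the amount of reinforcement F6
# shrinks as the pixel value A grows, so bright portions are not over-enhanced.
def lut_value_remarks20(A, B, gamma_drc=0.6, gain=1.5):
    F8 = lambda x: x ** gamma_drc     # dynamic range compression
    F6 = lambda x: 1.0 - 0.5 * x      # amount-of-reinforcement adjustment
    F7 = lambda d: gain * d           # reinforcement of the difference A - B
    return F8(A) + F6(A) * F7(A - B)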
Remarks 21 described visual processing unit are that dynamic range compression function F 8 is monotone increasing functions according to remarks 20 described visual processing unit.
In the visual processing unit of the present invention, by using a monotone increasing function as the dynamic range compression function F8, local contrast can be maintained while dynamic range compression is carried out.
Remarks 22 described visual processing unit are that dynamic range compression function F8 is an upwardly convex function according to remarks 21 described visual processing unit.
In the visual processing unit of the present invention, by using an upwardly convex function as the dynamic range compression function F8, local contrast can be maintained while dynamic range compression is carried out.
Remarks 23 described visual processing unit are that dynamic range compression function F 8 is power functions according to remarks 20 described visual processing unit.
In the visual processing unit of the present invention, by using a power function as the dynamic range compression function F8, local contrast can be maintained while the dynamic range is converted.
Remarks 24 described visual processing unit are that dynamic range compression function F8 is a direct proportion function of proportionality coefficient 1 according to remarks 20 described visual processing unit.
In the visual processing unit of the present invention, contrast can be strengthened uniformly from the dark portion to the bright portion of the picture signal.
Remarks 25 described visual processing unit are according to remarks 19 described visual processing unit, and mathematical expression further comprises the value of strengthening for by the computing of strengthening, and add the computing of picture signal being carried out the value after the dynamic range compression.
In the visual processing unit of the present invention, for example, the clear composition of the picture signal can be strengthened according to the value of the picture signal while the dynamic range is compressed.
Remarks 26 described visual processing unit are according to each the described visual processing unit in the remarks 4~6, and the computing of reinforcement is the reinforcement function that the difference between picture signal and the processing signals is strengthened.Mathematical expression is for by strengthening the value that function is strengthened, and adds that the value after the value of picture signal carries out the computing of gray correction.
In the visual processing unit of the present invention, the difference between the picture signal and the processing signal represents, for example, the clear composition of the picture signal. Therefore, visual processing that carries out gray correction on the picture signal whose clear composition has been strengthened can be realized.
Remarks 27 described visual processing unit are according to remarks 26 described visual processing unit, the value C of each key element of 2 dimension LUT, be value A, the value B of processing signals, reinforcement function F 9, gray correction function F 10, determine based on mathematical expression F10 (A+F9 (A-B)) for picture signal.
At this, the value C of each key element is expressed as follows.That is, the difference (A-B) between the value A of picture signal and the value B of processing signals is represented for example clear signal.And F9 (A-B) represents for example intensive treatment of clear signal.Further, expression to the value A of picture signal be reinforced processing after clear signal between with carry out gray correction.
In visual processing unit of the present invention, can obtain making contrast to strengthen and the gray correction effect of Combination.
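A hedged sketch for remarks 27 follows, assuming F9 is a gain and F10 a gamma-type gray correction, with signals normalized to [0, 1]; the parameter values are illustrative only.

# Sketch of C = F10(A + F9(A - B)): strengthen the clear signal first,
# then apply gray correction to the sum.
def lut_value_remarks27(A, B, gain=1.5, gamma=0.45):
    F9 = lambda d: gain * d                  # reinforcement of A - B
    F10 = lambda x: max(x, 0.0) ** gamma     # gray correction
    return F10(A + F9(A - B))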
Remarks 28 described visual processing unit are, according to each the described visual processing unit in the remarks 4~6, such that the computing of reinforcement is a reinforcement function that strengthens the difference between the picture signal and the processing signal. The mathematical expression further comprises: for the value strengthened by the reinforcement function, adding the value obtained by carrying out gray correction on the picture signal.
In visual processing unit of the present invention, the difference between picture signal and the processing signals for example, the clear composition of presentation video signal.And the reinforcement of clear composition and the gray correction of picture signal are carried out separately.Therefore, no matter the gray correction amount of picture signal how, all can be carried out the reinforcement of certain clear composition.
Remarks 29 described visual processing unit are according to remarks 28 described visual processing unit; the value C of each key element of the 2 dimension LUT is determined, for the value A of the picture signal, the value B of the processing signal, the reinforcement function F11, and the gray correction function F12, based on the mathematical expression F12(A) + F11(A-B).
Here, the value C of each key element is expressed as follows. That is, the difference (A-B) between the value A of the picture signal and the value B of the processing signal represents, for example, a clear signal. F11(A-B) represents, for example, the intensive treatment of the clear signal. Further, the value of the picture signal after gray correction is added to the clear signal after the intensive treatment.
In the visual processing unit of the present invention, a certain degree of contrast reinforcement can be carried out regardless of the amount of gray correction.
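For remarks 29 the same kind of sketch applies, now with the gray correction and the reinforcement computed separately and added; F11 and F12 are assumed here to be a gain and a gamma curve, purely for illustration.

# Sketch of C = F12(A) + F11(A - B): gray correction of A is independent of
# the reinforcement of the clear signal A - B.
def lut_value_remarks29(A, B, gain=1.5, gamma=0.45):
    F11 = lambda d: gain * d      # reinforcement of the clear signal
    F12 = lambda x: x ** gamma    # gray correction of the picture signal
    return F12(A) + F11(A - B)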
Remarks 30 described visual processing unit are according to each the described visual processing unit in the remarks 1~29, in 2 dimension LUT, the value of preserving for the picture signal and the processing signals of identical value, the value of picture signal and processing signals has dull increasing or the dull relation that reduces relatively.
Here, the values that the 2 dimension LUT preserves for a picture signal and a processing signal of identical value outline the overall characteristics of the 2 dimension LUT.
In the visual processing unit of the present invention, the 2 dimension LUT preserves, as the values corresponding to a picture signal and a processing signal of identical value, values that increase or decrease monotonically with respect to the values of the picture signal and the processing signal.
Remarks 31 described visual processing unit are according to each the described visual processing unit in the remarks 1~3, and 2 dimension LUT preserve the gray-scale transformation curve group of forming as by many gray-scale transformation curves with the relation between picture signal and the output signal.
Here, the so-called gray-scale transformation curve group is a set of gray-scale transformation curves that apply gray scale processing to pixel values such as the luminance or brightness of the picture signal.
In visual processing unit of the present invention, use the gray-scale transformation curve of from many gray-scale transformation curves, selecting, can carry out the gray scale of picture signal and handle.Therefore, can carry out more suitable gray scale handles.
Remarks 32 described visual processing unit are according to remarks 31 described visual processing unit, each of gray-scale transformation curve group, the value of relative picture signal, dull increasing.
In visual processing unit of the present invention, can use the dull gray-scale transformation curve group that increases of value of relative picture signal, carry out gray scale and handle.
Remarks 33 described visual processing unit are according to remarks 31 or 32 described visual processing unit, and processing signals is the signal that is used for selecting from a plurality of gray-scale transformation curve groups corresponding gray-scale transformation curve.
At this, processing signals is the signal that is used to select gray-scale transformation curve, for example, and by picture signal after the spatial manipulation etc.
In visual processing unit of the present invention, can use by the selected gray-scale transformation curve of processing signals, carry out the gray scale of picture signal and handle.
Remarks 34 described visual processing unit are according to remarks 33 described visual processing unit, the value of processing signals, and at least 1 gray-scale transformation curve that comprises with many gray-scale transformation curve groups is associated.
Here, at least one gray-scale transformation curve to be used in the gray scale processing is selected by the value of the processing signal.
In the visual processing unit of the present invention, at least one gray-scale transformation curve is selected by the value of the processing signal, and the selected gray-scale transformation curve is then used to carry out the gray scale processing of the picture signal.
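Seen as in remarks 31 to 34, the same lookup can be read as curve selection; the sketch below assumes curve_group is a list of monotonically increasing curves, one per processing signal value, which is an illustrative data layout rather than the embodiment's.

# Sketch: the processing signal value B selects one gray-scale transformation
# curve, which is then applied to the picture signal value A.
def apply_curve_group(A, B, curve_group):
    curve = curve_group[B]        # one monotonically increasing curve per B
    return curve[A]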
Remarks 35 described visual processing unit are according to each the described visual processing unit in the remarks 1~34, the description document data that login is made in advance by the computing of regulation in 2 dimension LUT.
In the visual processing unit of the present invention, visual processing is carried out using a 2 dimension LUT in which description document data made in advance have been logined. Since no processing to make the description document data is needed at the time of visual processing, the execution speed of the visual processing can be increased.
Remarks 36 described visual processing unit are according to remarks 35 described visual processing unit, and 2 dimension LUT can change by the login of description document data.
At this, so-called description document data are the data that realize 2 dimension LUT of different visual processing.
In visual processing unit of the present invention,, can carry out various changes to the visual processing that is realized by the login of description document data.That is, need not constitute the hardware of visual processing unit and change, just can realize various visual processing.
Remarks 37 described visual processing unit are according to remarks 35 or 36 described visual processing unit, also possess the description document data entry mechanism that is used in vision processor structure login description document data.
At this, description document data entry mechanism logins precalculated description document data according to visual processing in the vision processor structure.
In the visual processing unit of the present invention, the visual processing that is realized can be changed in various ways by the login of description document data. That is, various kinds of visual processing can be realized without changing the hardware configuration of the visual processing unit.
Remarks 38 described visual processing unit are according to remarks 35 described visual processing unit, and the vision processor structure is by the description document data of external device (ED) acquisition made.
The description document data are made in advance by an external device. The so-called external device is, for example, a computer having a CPU and a program capable of making the description document data. The vision processor structure obtains the description document data, for example via a network or a recording medium, and carries out visual processing using the obtained description document data.
In visual processing unit of the present invention, use description document data by the external device (ED) made, can carry out visual processing.
Remarks 39 described visual processing unit are according to remarks 38 described visual processing unit, pass through the description document data that obtained, can change 2 dimension LUT.
In visual processing unit of the present invention, the description document data that obtained are used as 2 dimension LUT and login again.Like this, can change, realize different visual processing 2 dimension LUT.
Remarks 40 described visual processing unit are, according to remarks 38 or 39 described visual processing unit, such that the vision processor structure obtains the description document data by a communication network.
At this, so-called communication network is the bindiny mechanism that can communicate by letter such as special circuit, public line, the Internet, LAN for example, both can be wired also can be wireless.
In visual processing unit of the present invention, use by the description document data that communication network obtained, can realize visual processing.
Remarks 41 described visual processing unit are according to remarks 35 described visual processing unit, also possess: the description document data creating mechanism that makes the description document data.
The description document data creating mechanism is, for example, a mechanism that makes the description document data using characteristics of the picture signal, the processing signal, and the like.
In visual processing unit of the present invention, use description document data by description document data creating mechanism made, can realize visual processing.
Remarks 42 described visual processing unit are according to remarks 41 described visual processing unit, and description document data creating mechanism based on the histogram of the gamma characteristic of picture signal, makes the description document data.
In the visual processing unit of the present invention, visual processing is realized using description document data made based on a histogram of the gamma characteristic of the picture signal. Therefore, suitable visual processing can be realized according to the characteristics of the picture signal.
Remarks 43 described visual processing unit are according to remarks 35 described visual processing unit, the description document data of in 2 dimension LUT, logining, and condition according to the rules is switched.
In visual processing unit of the present invention, the description document data of using condition according to the rules to be switched realize visual processing.Therefore, can realize more suitable visual processing.
Remarks 44 described visual processing unit are that so-called defined terms is the condition relevant with shading value according to remarks 43 described visual processing unit.
In visual processing unit of the present invention,, can realize more suitable visual processing based on the condition relevant with shading value.
Remarks 45 described visual processing unit are that shading value is the shading value of picture signal according to remarks 44 described visual processing unit.
In visual processing unit of the present invention,, can realize more suitable visual processing based on the condition relevant with the shading value of picture signal.
Remarks 46 described visual processing unit are according to remarks 45 described visual processing unit, and also possess a lightness decision mechanism that judges the shading value of the picture signal. The description document data logined in the 2 dimension LUT are switched according to the result of determination of the lightness decision mechanism.
Here, the lightness decision mechanism judges the shading value of the picture signal based on pixel values such as its luminance or brightness, and the description document data are then switched according to the result of determination.
In visual processing unit of the present invention,, can realize more suitable visual processing according to the shading value of picture signal.
Remarks 47 described visual processing unit are according to remarks 44 described visual processing unit, also possess the lightness input mechanism that the condition relevant with shading value is transfused to.The description document data of logining among the LUT in 2 dimensions are switched according to the input results of lightness input mechanism.
Here, the lightness input mechanism is, for example, a switch connected by wire or wirelessly with which the user inputs conditions related to the shading value.
In visual processing unit of the present invention, the user judges the condition relevant with shading value, via the lightness input mechanism, can be described the switching of file data.Therefore, can realize suitable visual processing by the user.
Remarks 48 described visual processing unit are according to remarks 47 described visual processing unit, and the lightness input mechanism is transfused to the shading value of the input environment of the shading value of output environment of output signal or input signal.
Here, the shading value of the output environment means, for example, the shading value of the ambient light around the medium to which the output signal is output, in a computer, television set, digital camera, portable phone, PDA, or the like, or the shading value of the medium itself to which the output signal is output, such as print paper. The shading value of the input environment means, for example, the shading value of the medium itself from which the input signal is input, such as scanned paper.
In visual processing unit of the present invention, for example the user judges the relevant conditions such as shading value in room, via the shading value input mechanism, can be described the switching of file data.Therefore, can realize visual processing more suitable for the user.
Remarks 49 described visual processing unit are according to remarks 44 described visual processing unit, and also possess a lightness testing agency that detects at least two kinds of shading value. The description document data logined in the 2 dimension LUT are switched according to the detection result of the lightness testing agency.
Here, the so-called lightness testing agency is, for example, a mechanism that detects the shading value of the picture signal based on pixel values such as the luminance or brightness of the picture signal, a mechanism such as a photosensitive element that detects the shading value of the output environment or the input environment, or a mechanism that detects conditions related to the shading value input by the user. The shading value of the output environment is, for example, the shading value of the ambient light around the medium to which the output signal is output, in a computer, television set, digital camera, portable phone, PDA, or the like, or the shading value of the medium itself to which the output signal is output, such as print paper. The shading value of the input environment means, for example, the shading value of the medium itself from which the input signal is input, such as scanned paper.
In visual processing unit of the present invention, 2 kinds to shading value detect at least, are described the switching of file data according to these.Therefore, can realize more suitable visual processing.
Remarks 50 described visual processing unit are according to remarks 49 described visual processing unit, the shading value that shading value testing agency is detected comprises: the shading value of the shading value of the shading value of picture signal, the output environment of output signal or the input environment of input signal.
In visual processing unit of the present invention, the shading value according to the input environment of the shading value of the output environment of the shading value of picture signal, output signal or input signal can realize more suitable visual processing.
Remarks 51 described visual processing unit are according to remarks 43 described visual processing unit, and also possess a selection mechanism of description document data, which allows a person to carry out the selection of the description document data logined in the 2 dimension LUT. The description document data logined in the 2 dimension LUT are switched according to the selection result of the description document data selection mechanism.
The description document data selection mechanism allows the user to carry out the selection of description document data. In the visual processing unit, visual processing is then realized using the selected description document data.
In visual processing unit of the present invention, the user selects description document according to hobby, can realize visual processing.
Remarks 52 described visual processing unit are according to remarks 51 described visual processing unit, and the selection mechanism of description document data is the input unit that is used to be described the selection of file.
At this, input unit is for example to be built in or the switch that is connected by wired or wireless and visual processing unit etc.
In visual processing unit of the present invention, the user uses input unit, can select the description document of liking.
Remarks 53 described visual processing unit are according to remarks 43 described visual processing unit, also possess the picture characteristics decision mechanism that the picture characteristics of picture signal is judged.The description document data of logining among the LUT in 2 dimensions are switched according to the judged result of picture characteristics decision mechanism.
The picture characteristics decision mechanism is judged the picture characteristics such as brightness, lightness or spatial frequency of picture signal.Visual processing unit uses the description document data of switching according to the judged result of picture characteristics decision mechanism, realizes visual processing.
In visual processing unit of the present invention, the picture characteristics decision mechanism is selected the corresponding description document data of picture characteristics automatically.Therefore, can use, realize visual processing for picture signal description document data more suitably.
Remarks 54 described visual processing unit are according to remarks 43 described visual processing unit, also possess the User Recognition mechanism that the user is discerned.The description document data of logining among the LUT in 2 dimensions are switched according to the recognition result of User Recognition mechanism.
User Recognition mechanism is for example to be used to discern user's input unit or camera etc.
In visual processing unit of the present invention, can realize being suitable for the user's that User Recognition mechanism discerned visual processing.
The visual processing unit according to Remark 55 is the visual processing unit according to any one of Remarks 1 to 54, wherein the visual processing mechanism performs an interpolation operation on the values stored in the 2-dimensional LUT and outputs the output signal.
The 2-dimensional LUT stores values for the picture signal and the processing signal at predetermined intervals. By performing an interpolation operation on the values of the 2-dimensional LUT corresponding to the interval that contains the input value of the picture signal or the processing signal, the output signal value corresponding to the input value of the picture signal or the processing signal is output.
In the visual processing unit of the present invention, the 2-dimensional LUT does not need to store values for every value that the picture signal or the processing signal can take, so the memory capacity required for the 2-dimensional LUT can be reduced.
The visual processing unit according to Remark 56 is the visual processing unit according to Remark 55, wherein the interpolation operation is a linear interpolation based on the values of the low-order bits of at least one of the picture signal and the processing signal expressed in binary.
The 2-dimensional LUT stores values corresponding to the values of the high-order bits of the picture signal and the processing signal. The visual processing mechanism outputs the output signal by linearly interpolating, using the values of the low-order bits of the picture signal and the processing signal, between the values of the 2-dimensional LUT that correspond to the interval containing the input values of the picture signal and the processing signal, as illustrated by the sketch below.
In the visual processing unit of the present invention, the 2-dimensional LUT can be stored with even less memory capacity while more accurate visual processing is realized.
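As a rough illustration of this interpolation, the following Python sketch looks up a value in a 2-dimensional LUT using the high-order bits of both signals as the cell index and the low-order bits as bilinear interpolation weights. The bit widths, the LUT size, and the placeholder LUT contents are assumptions made only for this example; the remarks do not fix them.

```python
import numpy as np

BITS = 8          # assumed signal width
HI_BITS = 4       # assumed number of high-order bits used as LUT index
LO_BITS = BITS - HI_BITS
STEP = 1 << LO_BITS

# Hypothetical 2-dimensional LUT: a 17 x 17 grid covering picture-signal and
# processing-signal values at intervals of STEP (placeholder contents).
grid = np.arange(0, 256 + 1, STEP, dtype=np.float64)
lut = (grid[:, None] + grid[None, :]) / 2.0

def lookup(a: int, b: int) -> float:
    """Bilinear interpolation of the 2D LUT.

    a: picture-signal value, b: processing-signal (unsharp) value.
    High-order bits select the LUT cell, low-order bits give the weights.
    """
    ia, ib = a >> LO_BITS, b >> LO_BITS                        # cell index
    fa, fb = (a & (STEP - 1)) / STEP, (b & (STEP - 1)) / STEP  # weights
    c00, c10 = lut[ia, ib], lut[ia + 1, ib]
    c01, c11 = lut[ia, ib + 1], lut[ia + 1, ib + 1]
    return ((1 - fa) * (1 - fb) * c00 + fa * (1 - fb) * c10 +
            (1 - fa) * fb * c01 + fa * fb * c11)

print(lookup(100, 50))   # -> 75.0 with the placeholder LUT
```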
The visual processing unit according to Remark 57 is the visual processing unit according to any one of Remarks 1 to 56, wherein the input signal processing mechanism performs spatial processing on the picture signal.
In the visual processing unit of the present invention, visual processing can be realized by the 2-dimensional LUT using the picture signal and the spatially processed picture signal.
The visual processing unit according to Remark 58 is the visual processing unit according to Remark 57, wherein the input signal processing mechanism generates an unsharp signal from the picture signal.
Here, the unsharp signal means a signal obtained by directly or indirectly applying spatial processing to the picture signal.
In the visual processing unit of the present invention, visual processing can be realized by the 2-dimensional LUT using the picture signal and the unsharp signal.
The visual processing unit according to Remark 59 is the visual processing unit according to Remark 57 or 58, wherein the spatial processing derives a mean value, a maximum value, or a minimum value of the picture signal.
Here, the mean value may be, for example, a simple mean or a weighted mean of the picture signal.
In the visual processing unit of the present invention, visual processing can be realized by the 2-dimensional LUT using the picture signal and a mean value, maximum value, or minimum value of the picture signal.
The visual processing unit according to Remark 60 is the visual processing unit according to any one of Remarks 1 to 59, wherein the visual processing mechanism performs spatial processing and gray scale processing using the input picture signal and processing signal.
In the visual processing unit of the present invention, spatial processing and gray scale processing are performed simultaneously using the 2-dimensional LUT.
The visual processing method according to Remark 61 comprises an input signal processing step and a visual processing step. The input signal processing step performs predetermined processing on the input picture signal and outputs a processing signal. The visual processing step outputs an output signal based on a 2-dimensional LUT that gives the relation between the input picture signal and the processing signal, on the one hand, and the output signal, which is the visually processed picture signal, on the other.
Here, the predetermined processing means, for example, direct or indirect processing of the picture signal, and includes processing that converts the pixel values of the picture signal, such as spatial processing or gray scale processing.
In the visual processing method of the present invention, visual processing is performed using a 2-dimensional LUT in which the relation between the picture signal and processing signal and the visually processed output signal is recorded. Therefore, visual processing can be sped up.
The visual processing program according to Remark 62 is a program for causing a computer to perform a visual processing method, and causes the computer to execute a visual processing method comprising an input signal processing step and a visual processing step. The input signal processing step performs predetermined processing on the input picture signal and outputs a processing signal. The visual processing step outputs an output signal based on a 2-dimensional LUT that gives the relation between the input picture signal and the processing signal, on the one hand, and the output signal, which is the visually processed picture signal, on the other.
Here, the predetermined processing means, for example, direct or indirect processing of the picture signal, and includes processing that converts the pixel values of the picture signal, such as spatial processing or gray scale processing.
In the visual processing program of the present invention, visual processing is performed using a 2-dimensional LUT in which the relation between the picture signal and processing signal and the visually processed output signal is recorded. Therefore, visual processing can be sped up.
The integrated circuit according to Remark 63 includes the visual processing unit according to any one of Remarks 1 to 60.
In the integrated circuit of the present invention, the same effects as the visual processing unit according to any one of Remarks 1 to 60 can be obtained.
The display unit according to Remark 64 comprises the visual processing unit according to any one of Remarks 1 to 60, and a display mechanism that displays the output signal output from the visual processing unit.
In the display unit of the present invention, the same effects as the visual processing unit according to any one of Remarks 1 to 60 can be obtained.
The filming apparatus according to Remark 65 comprises: a photographic unit that captures an image; and the visual processing unit according to any one of Remarks 1 to 60, which performs visual processing on the image captured by the photographic unit as the picture signal.
In the filming apparatus of the present invention, the same effects as the visual processing unit according to any one of Remarks 1 to 60 can be obtained.
The portable information terminal according to Remark 66 comprises: a data reception mechanism that receives image data through communication or broadcasting; the visual processing unit according to any one of Remarks 1 to 60, which performs visual processing on the received image data as the picture signal; and a display mechanism that displays the picture signal visually processed by the visual processing unit.
In the portable information terminal of the present invention, the same effects as the visual processing unit according to any one of Remarks 1 to 60 can be obtained.
The portable information terminal according to Remark 67 comprises: a photographic unit that captures an image; the visual processing unit according to any one of Remarks 1 to 60, which performs visual processing on the image captured by the photographic unit as the picture signal; and a data transmission mechanism that transmits the visually processed picture signal.
In the portable information terminal of the present invention, the same effects as the visual processing unit according to any one of Remarks 1 to 60 can be obtained.
The image processing apparatus according to Remark 68 is an image processing apparatus that performs image processing on an input image signal, and comprises a description document data creation mechanism and an image processing execution mechanism. The description document data creation mechanism creates the description document data to be used for image processing, based on a plurality of description document data for performing different kinds of image processing. The image processing execution mechanism performs image processing using the description document data created by the description document data creation mechanism.
Here, image processing means, for example, visual processing such as spatial processing or gray scale processing, or color processing such as color conversion (the same applies hereinafter in this section).
Description document data means, for example, coefficient matrix data used for computing on the input image signal, or table data that stores, for each value of the input image signal, the value of the input image signal after image processing (the same applies hereinafter in this section).
The image processing apparatus of the present invention creates new description document data based on a plurality of description document data. Therefore, even if only a small number of description document data are prepared in advance, many different kinds of image processing can be performed. In other words, the memory capacity for storing description document data can be reduced.
The image processing apparatus according to Remark 69 is an image processing apparatus that performs image processing on an input image signal, and comprises a description document information output mechanism and an image processing execution mechanism. The description document information output mechanism outputs description document information that specifies the description document data to be used for image processing. The image processing execution mechanism performs image processing using the description document data specified by the information output from the description document information output mechanism.
Here, description document information means, for example, information that specifies description document data, such as the number of the description document data, parameter information indicating the features of the processing performed by the description document data, or information used to specify other description document data.
In the image processing apparatus of the present invention, the description document data can be controlled based on the description document information, and image processing can be performed.
The image processing apparatus according to Remark 70 is the image processing apparatus according to Remark 69, wherein the description document information output mechanism outputs the description document information according to the display environment in which the image-processed input image signal is displayed.
Here, the display environment means, for example, the brightness or color temperature of the ambient light, the device performing the display, the size of the displayed image, the positional relation between the displayed image and the user viewing it, information related to the user, and so on.
In the image processing apparatus of the present invention, image processing corresponding to the display environment can be performed.
The image processing apparatus according to Remark 71 is the image processing apparatus according to Remark 69, wherein the description document information output mechanism outputs the description document information according to information related to the description document data contained in the input image signal.
Information related to the description document data means, for example, information that specifies description document data, such as the number of the description document data, parameter information indicating the features of the processing performed by the description document data, or information used to specify other description document data.
In the image processing apparatus of the present invention, information related to the description document data is obtained from the input image signal, and image processing can be performed accordingly.
The image processing apparatus according to Remark 72 is the image processing apparatus according to Remark 69, wherein the description document information output mechanism outputs the description document information according to acquired information related to the features of the image processing.
Information related to the features of the image processing means information about the parameters of the image processing, for example parameter values for brightness, image quality, color adjustment, and so on.
In the image processing apparatus of the present invention, image processing can be performed by, for example, inputting information about the processing features according to the user's preference.
The image processing apparatus according to Remark 73 is the image processing apparatus according to Remark 69, wherein the description document information output mechanism outputs the description document information according to information related to the environment in which the input image signal was generated.
Information related to the environment in which the input image signal was generated includes, for example, information about the shooting environment when the input image signal was recorded by photographing, shooting permission information in that shooting environment, and so on.
In the image processing apparatus of the present invention, image processing can be performed according to information related to the environment in which the input image signal was generated.
The image processing apparatus according to Remark 74 is the image processing apparatus according to Remark 69, wherein the input image signal includes image data and attribute information of the input image signal, and the description document information output mechanism outputs the description document information according to the attribute information.
In the image processing apparatus of the present invention, image processing can be performed according to the attribute information of the input image signal. Therefore, image processing suitable for the input image signal can be performed.
The image processing apparatus according to Remark 75 is the image processing apparatus according to Remark 74, wherein the attribute information includes overall attribute information related to the image data as a whole.
Overall attribute information is, for example, information related to the creation of the image data as a whole, information related to the content of the image data as a whole, and so on.
In the image processing apparatus of the present invention, image processing can be performed according to the overall attribute information. Therefore, image processing suitable for the image data can be performed.
The image processing apparatus according to Remark 76 is the image processing apparatus according to Remark 74 or 75, wherein the attribute information includes partial attribute information related to a part of the image data.
Partial attribute information includes, for example, information related to the scene content of a part of the image data.
In the image processing apparatus of the present invention, image processing can be performed according to the partial attribute information. Therefore, image processing suitable for the image data can be performed.
The image processing apparatus according to Remark 77 is the image processing apparatus according to Remark 74, wherein the attribute information includes generation environment attribute information related to the environment in which the input image signal was generated.
Generation environment attribute information is information related to the environment in which the input image signal was shot, recorded, or created, for example information about the environment at the time the input image signal was generated, or operation information of the equipment used to generate the input image signal.
In the image processing apparatus of the present invention, image processing can be performed according to the generation environment attribute information. Therefore, image processing suitable for the input image signal can be performed.
The image processing apparatus according to Remark 78 is the image processing apparatus according to Remark 74, wherein the attribute information includes media attribute information related to the medium from which the input image signal is obtained.
Media attribute information means information related to the medium from which the input image signal is obtained, such as a broadcast medium, a communication medium, or a recording medium.
In the image processing apparatus of the present invention, image processing can be performed according to the media attribute information. Therefore, image processing suitable for the attributes of the medium can be performed.
The image processing apparatus according to Remark 79 is the image processing apparatus according to any one of Remarks 68 to 78, wherein the description document data are a 2-dimensional LUT, and the image processing execution mechanism includes the visual processing unit according to any one of Remarks 1 to 60.
In the image processing apparatus of the present invention, the same effects as the image processing apparatus according to any one of Remarks 68 to 78 can be obtained, and furthermore the same effects as the visual processing unit according to any one of Remarks 1 to 60 can be obtained.
The image processing apparatus according to Remark 80 comprises an image processing execution mechanism, a description document information output mechanism, and a description document information adding mechanism. The image processing execution mechanism performs image processing on the input image signal. The description document information output mechanism outputs description document information that specifies the description document data suitable for the image processing of the input image signal. The description document information adding mechanism attaches the description document information to the input image signal, or to the input image signal after image processing by the image processing execution mechanism, and outputs the result.
With the image processing apparatus of the present invention, the input image signal, or the input image signal after image processing by the image processing execution mechanism, can be associated with the description document information. Therefore, a device that receives the signal with the attached description document information can easily perform suitable image processing on that signal.
The integrated circuit according to Remark 81 includes the image processing apparatus according to any one of Remarks 68 to 80.
In the integrated circuit of the present invention, the same effects as the image processing apparatus according to any one of Remarks 68 to 80 can be obtained.
The display unit according to Remark 82 comprises the image processing apparatus according to any one of Remarks 68 to 80, and a display mechanism that displays the input image signal after image processing by the image processing apparatus.
In the display unit of the present invention, the same effects as the image processing apparatus according to any one of Remarks 68 to 80 can be obtained.
The filming apparatus according to Remark 83 comprises: a photographic unit that captures an image; and the image processing apparatus according to any one of Remarks 68 to 80, which performs image processing on the image captured by the photographic unit as the input image signal.
In the filming apparatus of the present invention, the same effects as the image processing apparatus according to any one of Remarks 68 to 80 can be obtained.
The portable information terminal according to Remark 84 comprises: a data reception mechanism that receives image data transmitted through communication or broadcasting; the image processing apparatus according to any one of Remarks 68 to 80, which performs image processing on the received image data as the input image signal; and a display mechanism that displays the input image signal after image processing by the image processing apparatus.
In the portable information terminal of the present invention, the same effects as the image processing apparatus according to any one of Remarks 68 to 80 can be obtained.
The portable information terminal according to Remark 85 comprises: a photographic unit that captures an image; the image processing apparatus according to any one of Remarks 68 to 80, which performs image processing on the image captured by the photographic unit as the input image signal; and a data transmission mechanism that transmits the input image signal after image processing.
In the portable information terminal of the present invention, the same effects as the image processing apparatus according to any one of Remarks 68 to 80 can be obtained.
(The 3rd remarks)
The present invention (in particular the inventions illustrated in the 1st to 3rd embodiments) can also be expressed as follows. In addition, any remark cited within the remarks of this section ("the 3rd remarks") refers to a remark of the 3rd remarks.
(Contents of the 3rd remarks)
(Remark 1)
A visual processing unit comprising:
an input signal processing mechanism that performs spatial processing on an input picture signal and outputs a processing signal; and
a signal operation mechanism that outputs an output signal based on an operation that enhances the differences between the respective values of the picture signal and the processing signal after each has been converted by a predetermined conversion.
(Remark 2)
The visual processing unit according to Remark 1, wherein
the signal operation mechanism computes the value C of the output signal from the value A of the picture signal, the value B of the processing signal, a conversion function F1, the inverse conversion function F2 of the conversion function F1, and an enhancement function F3, based on the expression F2(F1(A) + F3(F1(A) - F1(B))).
(Remark 3)
The visual processing unit according to Remark 2, wherein
the conversion function F1 is a logarithmic function.
(Remark 4)
The visual processing unit according to Remark 2, wherein
the inverse conversion function F2 is a gamma correction function.
(Remark 5)
The visual processing unit according to any one of Remarks 2 to 4, wherein
the signal operation mechanism comprises: a signal space conversion mechanism that converts the signal space of the picture signal and the processing signal; an enhancement processing mechanism that performs enhancement processing on the difference signal between the converted picture signal and the converted processing signal; and an inverse conversion mechanism that performs the inverse conversion of the signal space on the sum signal of the converted picture signal and the difference signal after the enhancement processing, and outputs the output signal.
(Remark 6)
A visual processing unit comprising:
an input signal processing mechanism that performs spatial processing on an input picture signal and outputs a processing signal; and
a signal operation mechanism that outputs an output signal based on an operation that enhances the ratio between the picture signal and the processing signal.
(Remark 7)
The visual processing unit according to Remark 6, wherein
the signal operation mechanism outputs the output signal based on an operation that further performs dynamic range compression of the picture signal.
(Remark 8)
The visual processing unit according to Remark 6 or 7, wherein
the signal operation mechanism computes the value C of the output signal from the value A of the picture signal, the value B of the processing signal, a dynamic range compression function F4, and an enhancement function F5, based on the expression F4(A) * F5(A/B).
(Remark 9)
The visual processing unit according to Remark 8, wherein
the dynamic range compression function F4 is a proportional function with a proportionality coefficient of 1.
(Remark 10)
The visual processing unit according to Remark 8, wherein
the dynamic range compression function F4 is a monotonically increasing function.
(Remark 11)
The visual processing unit according to Remark 10, wherein
the dynamic range compression function F4 is an upward convex function.
(Remark 12)
The visual processing unit according to Remark 8, wherein
the dynamic range compression function F4 is a power function.
(Remark 13)
The visual processing unit according to Remark 12, wherein
the exponent of the power function in the dynamic range compression function F4 is determined based on a target contrast value, which is a target value of contrast when an image is displayed, and an actual contrast value, which is the contrast value in the display environment when the image is displayed.
(Remark 14)
The visual processing unit according to any one of Remarks 8 to 13, wherein
the enhancement function F5 is a power function.
(Remark 15)
The visual processing unit according to Remark 14, wherein
the exponent of the power function in the enhancement function F5 is determined based on a target contrast value, which is a target value of contrast when an image is displayed, and an actual contrast value, which is the contrast value in the display environment when the image is displayed.
(Remark 16)
The visual processing unit according to Remark 14 or 15, wherein
when the value A of the picture signal is larger than the value B of the processing signal, the exponent of the power function in the enhancement function F5 is a value that decreases monotonically with respect to the value A of the picture signal.
(Remark 17)
The visual processing unit according to Remark 14 or 15, wherein
when the value A of the picture signal is smaller than the value B of the processing signal, the exponent of the power function in the enhancement function F5 is a value that increases monotonically with respect to the value A of the picture signal.
(Remark 18)
The visual processing unit according to Remark 14 or 15, wherein
when the value A of the picture signal is larger than the value B of the processing signal, the exponent of the power function in the enhancement function F5 is a value that increases monotonically with respect to the value A of the picture signal.
(Remark 19)
The visual processing unit according to Remark 14 or 15, wherein
the exponent of the power function in the enhancement function F5 is a value that increases monotonically with respect to the absolute value of the difference between the value A of the picture signal and the value B of the processing signal.
(Remark 20)
The visual processing unit according to any one of Remarks 14 to 19, wherein
at least one of the maximum value and the minimum value of the enhancement function F5 is limited within a predetermined range.
(Remark 21)
The visual processing unit according to Remark 8, wherein
the signal operation mechanism has: an enhancement processing mechanism that performs enhancement processing on a divided signal obtained by dividing the picture signal by the processing signal; and an output processing mechanism that outputs the output signal based on the picture signal and the divided signal after the enhancement processing.
(Remark 22)
The visual processing unit according to Remark 21, wherein
the output processing mechanism performs a multiplication of the picture signal and the divided signal after the enhancement processing.
(Remark 23)
The visual processing unit according to Remark 21, wherein
the output processing mechanism includes a DR compression mechanism that performs dynamic range (DR) compression on the picture signal, and performs a multiplication of the picture signal after the DR compression and the divided signal after the enhancement processing.
(Remark 24)
The visual processing unit according to any one of Remarks 8 to 23, further comprising:
a 1st conversion mechanism that converts input image data in a 1st predetermined range into a 2nd predetermined range and uses the result as the picture signal; and
a 2nd conversion mechanism that converts the output signal in a 3rd predetermined range into a 4th predetermined range and uses the result as an output image signal,
wherein the 2nd predetermined range is determined based on a target contrast value, which is a target value of contrast when an image is displayed, and
the 3rd predetermined range is determined based on an actual contrast value, which is the contrast value in the display environment when the image is displayed.
(Remark 25)
The visual processing unit according to Remark 24, wherein
the dynamic range compression function F4 is a function that converts the picture signal in the 2nd predetermined range into the output signal in the 3rd predetermined range.
(Remark 26)
The visual processing unit according to Remark 24 or 25, wherein
the 1st conversion mechanism converts the minimum value and the maximum value of the 1st predetermined range into the minimum value and the maximum value of the 2nd predetermined range, respectively, and
the 2nd conversion mechanism converts the minimum value and the maximum value of the 3rd predetermined range into the minimum value and the maximum value of the 4th predetermined range, respectively.
(Remark 27)
The visual processing unit according to Remark 26, wherein
the conversions in the 1st conversion mechanism and the 2nd conversion mechanism are each linear conversions.
(Remark 28)
The visual processing unit according to any one of Remarks 24 to 27, further comprising
a setting mechanism that sets the 3rd predetermined range.
(Remark 29)
The visual processing unit according to Remark 28, wherein
the setting mechanism includes: a storage mechanism that stores the dynamic range of the display unit that displays the image; and a measuring mechanism that measures the luminance of the ambient light in the display environment when the image is displayed.
(Remark 30)
The visual processing unit according to Remark 28, wherein
the setting mechanism includes a measuring mechanism that measures the luminance of the black level and the white level of the display unit when they are displayed in the display environment when the image is displayed.
(Remark 31)
A visual processing unit comprising:
an input signal processing mechanism that performs image processing on an input picture signal and outputs a processing signal; and
a signal operation mechanism that outputs an output signal based on an operation that enhances the difference between the picture signal and the processing signal according to the value of the picture signal.
(Remark 32)
The visual processing unit according to Remark 31, wherein
the signal operation mechanism outputs the output signal based on an operation that adds, to the value enhanced by the enhancement operation, a value obtained by performing dynamic range compression on the picture signal.
(Remark 33)
The visual processing unit according to Remark 31 or 32, wherein
the signal operation mechanism computes the value C of the output signal from the value A of the picture signal, the value B of the processing signal, an enhancement amount adjustment function F6, an enhancement function F7, and a dynamic range compression function F8, based on the expression F8(A) + F6(A) * F7(A - B).
(Remark 34)
The visual processing unit according to Remark 33, wherein
the dynamic range compression function F8 is a proportional function with a proportionality coefficient of 1.
(Remark 35)
The visual processing unit according to Remark 33, wherein
the dynamic range compression function F8 is a monotonically increasing function.
(Remark 36)
The visual processing unit according to Remark 35, wherein
the dynamic range compression function F8 is an upward convex function.
(Remark 37)
The visual processing unit according to Remark 33, wherein
the dynamic range compression function F8 is a power function.
(Remark 38)
The visual processing unit according to Remark 33, wherein
the signal operation mechanism has: an enhancement processing mechanism that performs, on the difference signal between the picture signal and the processing signal, enhancement processing corresponding to the pixel value of the picture signal; and an output processing mechanism that outputs the output signal based on the picture signal and the difference signal after the enhancement processing.
(Remark 39)
The visual processing unit according to Remark 38, wherein
the output processing mechanism performs an addition of the picture signal and the difference signal after the enhancement processing.
(Remark 40)
The visual processing unit according to Remark 38, wherein
the output processing mechanism includes a DR compression mechanism that performs dynamic range (DR) compression on the picture signal, and performs an addition of the picture signal after the DR compression and the difference signal after the enhancement processing.
(Remark 41)
A visual processing unit comprising:
an input signal processing mechanism that performs image processing on an input picture signal and outputs a processing signal; and
a signal operation mechanism that outputs an output signal based on an operation that adds, to a value obtained by enhancing the difference between the picture signal and the processing signal, a value obtained by performing gray scale correction on the picture signal.
(Remark 42)
The visual processing unit according to Remark 41, wherein
the signal operation mechanism computes the value C of the output signal from the value A of the picture signal, the value B of the processing signal, a conversion function F11, and a gray scale correction function F12, based on the expression F12(A) + F11(A - B).
(Remark 43)
The visual processing unit according to Remark 42, wherein
the signal operation mechanism has: an enhancement processing mechanism that performs enhancement processing on the difference signal between the picture signal and the processing signal; and an addition processing mechanism that adds the picture signal after the gray scale correction and the difference signal after the enhancement processing, and outputs the result as the output signal.
(Remark 44)
A visual processing method comprising:
a 1st conversion step of converting input image data in a 1st predetermined range into a 2nd predetermined range and using the result as a picture signal;
a signal operation step of outputting an output signal in a 3rd predetermined range based on an operation including at least one of an operation that performs dynamic range compression on the picture signal and an operation that enhances the ratio between the picture signal and a processing signal obtained by performing spatial processing on the picture signal; and
a 2nd conversion step of converting the output signal in the 3rd predetermined range into a 4th predetermined range and using the result as an output image signal,
wherein the 2nd predetermined range is determined based on a target contrast value, which is a target value of contrast when an image is displayed, and
the 3rd predetermined range is determined based on an actual contrast value, which is the contrast value in the display environment when the image is displayed.
(Remark 45)
A visual processing unit comprising:
a 1st conversion mechanism that converts input image data in a 1st predetermined range into a 2nd predetermined range and uses the result as a picture signal;
a signal operation mechanism that outputs an output signal in a 3rd predetermined range based on an operation including at least one of an operation that performs dynamic range compression on the picture signal and an operation that enhances the ratio between the picture signal and a processing signal obtained by performing spatial processing on the picture signal; and
a 2nd conversion mechanism that converts the output signal in the 3rd predetermined range into a 4th predetermined range and uses the result as an output image signal,
wherein the 2nd predetermined range is determined based on a target contrast value, which is a target value of contrast when an image is displayed, and
the 3rd predetermined range is determined based on an actual contrast value, which is the contrast value in the display environment when the image is displayed.
(Remark 46)
A visual processing program for causing a computer to perform visual processing, the program causing the computer to execute:
a 1st conversion step of converting input image data in a 1st predetermined range into a 2nd predetermined range and using the result as a picture signal;
a signal operation step of outputting an output signal in a 3rd predetermined range based on an operation including at least one of an operation that performs dynamic range compression on the picture signal and an operation that enhances the ratio between the picture signal and a processing signal obtained by performing spatial processing on the picture signal; and
a 2nd conversion step of converting the output signal in the 3rd predetermined range into a 4th predetermined range and using the result as an output image signal,
wherein the 2nd predetermined range is determined based on a target contrast value, which is a target value of contrast when an image is displayed, and
the 3rd predetermined range is determined based on an actual contrast value, which is the contrast value in the display environment when the image is displayed.
(Explanations of the 3rd remarks)
The visual processing unit according to Remark 1 comprises an input signal processing mechanism and a signal operation mechanism. The input signal processing mechanism performs spatial processing on the input picture signal and outputs a processing signal. The signal operation mechanism outputs an output signal based on an operation that enhances the differences between the respective values of the picture signal and the processing signal after each has been converted by a predetermined conversion.
Here, spatial processing means, for example, processing that applies a low-pass spatial filter to the input picture signal, or processing that derives the mean value, maximum value, or minimum value between a pixel of interest of the input picture signal and its surrounding pixels (the same applies hereinafter in this section). An enhancement operation means, for example, an operation that adjusts gain, an operation that suppresses excessive contrast, an operation that suppresses small-amplitude noise components, and so on (the same applies hereinafter in this section).
In the visual processing unit of the present invention, the picture signal and the processing signal are converted into a different space, and their difference can then be enhanced. This makes it possible to realize enhancement and the like adapted to visual characteristics.
The visual processing unit according to Remark 2 is the visual processing unit according to Remark 1, wherein the signal operation mechanism computes the value C of the output signal from the value A of the picture signal, the value B of the processing signal, the conversion function F1, the inverse conversion function F2 of the conversion function F1, and the enhancement function F3, based on the expression F2(F1(A) + F3(F1(A) - F1(B))).
The enhancement function F3 is, for example, a function that adjusts gain, a function that suppresses excessive contrast, a function that suppresses small-amplitude noise components, and so on.
The value C of the output signal can be read as follows. The value A of the picture signal and the value B of the processing signal are converted into values in another space by the conversion function F1. The difference between the converted picture signal value and the converted processing signal value represents, for example, a sharp signal in that different space. This difference, enhanced by the enhancement function F3, is added to the converted picture signal. Thus the value C of the output signal represents a value in which the sharp component in the different space has been enhanced.
In the visual processing unit of the present invention, using the value A of the picture signal and the value B of the processing signal converted into a different space, processing such as edge enhancement and contrast enhancement in that different space can be realized.
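A minimal sketch of the expression F2(F1(A) + F3(F1(A) - F1(B))), assuming for illustration that F1 is a logarithm, F2 is its inverse (an exponential), and F3 is a simple gain; these particular function choices and the gain value are not prescribed by the remark.

```python
import numpy as np

def visual_process_remark2(A, B, gain=1.5):
    """Sketch of C = F2(F1(A) + F3(F1(A) - F1(B))).

    Assumed for illustration only: F1 = log, F2 = exp (its inverse),
    F3 = a fixed gain on the difference.
    A: picture-signal value(s), B: processing-signal (unsharp) value(s).
    """
    A = np.asarray(A, dtype=np.float64)
    B = np.asarray(B, dtype=np.float64)
    f1 = np.log                     # conversion function F1 (assumed)
    f2 = np.exp                     # inverse conversion function F2
    f3 = lambda d: gain * d         # enhancement function F3 (assumed gain)
    return f2(f1(A) + f3(f1(A) - f1(B)))

# A pixel brighter than its surroundings is pushed further up, a darker one
# further down: local contrast is enhanced in logarithmic space.
print(visual_process_remark2(120.0, 100.0))   # > 120
print(visual_process_remark2(80.0, 100.0))    # < 80
```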
The visual processing unit according to Remark 3 is the visual processing unit according to Remark 2, wherein the conversion function F1 is a logarithmic function.
Human visual characteristics are generally logarithmic. Therefore, if the picture signal and the processing signal are converted into logarithmic space before processing, the processing suits visual characteristics.
In the visual processing unit of the present invention, contrast enhancement with a large visual effect, or dynamic range compression that maintains local contrast, can be performed.
The visual processing unit according to Remark 4 is the visual processing unit according to Remark 2, wherein the inverse conversion function F2 is a gamma correction function.
Usually, a picture signal is gamma-corrected by a gamma correction function according to the gamma characteristic of the device that inputs or outputs the picture signal.
In the visual processing unit of the present invention, the conversion function F1 can remove the gamma correction of the picture signal so that the processing is performed with linear characteristics. This makes it possible to correct optical blur.
The visual processing unit according to Remark 5 is the visual processing unit according to any one of Remarks 2 to 4, wherein the signal operation mechanism has a signal space conversion mechanism, an enhancement processing mechanism, and an inverse conversion mechanism. The signal space conversion mechanism converts the signal space of the picture signal and the processing signal. The enhancement processing mechanism performs enhancement processing on the difference signal between the converted picture signal and the converted processing signal. The inverse conversion mechanism performs the inverse conversion of the signal space on the sum signal of the converted picture signal and the difference signal after the enhancement processing, and outputs the output signal.
In the visual processing unit of the present invention, the signal space conversion mechanism uses the conversion function F1 to convert the signal space of the picture signal and the processing signal. The enhancement processing mechanism uses the enhancement function F3 to perform enhancement processing on the difference signal between the converted picture signal and the converted processing signal. The inverse conversion mechanism uses the inverse conversion function F2 to perform the inverse conversion of the signal space on the sum signal of the converted picture signal and the difference signal after the enhancement processing.
The visual processing unit according to Remark 6 comprises an input signal processing mechanism and a signal operation mechanism. The input signal processing mechanism performs spatial processing on the input picture signal and outputs a processing signal. The signal operation mechanism outputs an output signal based on an operation that enhances the ratio between the picture signal and the processing signal.
In the visual processing unit of the present invention, the ratio between the picture signal and the processing signal represents, for example, the sharp component of the picture signal. Therefore, for example, visual processing that enhances the sharp component can be performed.
The visual processing unit according to Remark 7 is the visual processing unit according to Remark 6, wherein the signal operation mechanism outputs the output signal based on an operation that further performs dynamic range compression of the picture signal.
In the visual processing unit of the present invention, for example, the sharp component of the picture signal, represented by the ratio between the picture signal and the processing signal, can be enhanced while the dynamic range is compressed at the same time.
The visual processing unit according to Remark 8 is the visual processing unit according to Remark 6 or 7, wherein the signal operation mechanism computes the value C of the output signal from the value A of the picture signal, the value B of the processing signal, the dynamic range compression function F4, and the enhancement function F5, based on the expression F4(A) * F5(A/B).
Here, the value C of the output signal can be read as follows. The quotient (A/B) of the value A of the picture signal and the value B of the processing signal represents, for example, a sharp signal, and F5(A/B) represents, for example, the enhancement amount of the sharp signal. This is equivalent to converting the value A of the picture signal and the value B of the processing signal into logarithmic space and enhancing their difference, and is therefore enhancement processing suited to visual characteristics.
In the visual processing unit of the present invention, local contrast can be enhanced while the dynamic range is compressed as needed.
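A minimal sketch of the expression F4(A) * F5(A/B), assuming for illustration that both F4 (dynamic range compression) and F5 (enhancement) are power functions with the placeholder exponents below; the remark itself only requires some dynamic range compression function and some enhancement function.

```python
import numpy as np

def visual_process_remark8(A, B, gamma=0.6, alpha=0.5):
    """Sketch of C = F4(A) * F5(A/B).

    Assumed for illustration only: F4(x) = x**gamma with gamma < 1 and
    F5(r) = r**alpha, with signals normalised to positive values in (0, 1].
    A: picture-signal value(s), B: processing-signal (unsharp) value(s).
    """
    A = np.asarray(A, dtype=np.float64)
    B = np.asarray(B, dtype=np.float64)
    return (A ** gamma) * ((A / B) ** alpha)

# The overall brightness range is compressed by F4, while the local ratio
# A/B (the sharp component) is re-expanded by F5.
print(visual_process_remark8(0.04, 0.05))   # dark pixel, darker than its surroundings
print(visual_process_remark8(0.50, 0.40))   # mid pixel, brighter than its surroundings
```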
The visual processing unit according to Remark 9 is the visual processing unit according to Remark 8, wherein the dynamic range compression function F4 is a proportional function with a proportionality coefficient of 1.
In the visual processing unit of the present invention, contrast can be enhanced from the dark portions to the bright portions of the picture signal. This contrast enhancement is enhancement processing suited to visual characteristics.
The visual processing unit according to Remark 10 is the visual processing unit according to Remark 8, wherein the dynamic range compression function F4 is a monotonically increasing function.
In the visual processing unit of the present invention, local contrast can be enhanced while dynamic range compression is performed using the dynamic range compression function F4, which is a monotonically increasing function.
The visual processing unit according to Remark 11 is the visual processing unit according to Remark 10, wherein the dynamic range compression function F4 is an upward convex function.
In the visual processing unit of the present invention, local contrast can be enhanced while dynamic range compression is performed using the dynamic range compression function F4, which is an upward convex function.
The visual processing unit according to Remark 12 is the visual processing unit according to Remark 8, wherein the dynamic range compression function F4 is a power function.
In the visual processing unit of the present invention, local contrast can be enhanced while the dynamic range is converted using the dynamic range compression function F4, which is a power function.
The visual processing unit according to Remark 13 is the visual processing unit according to Remark 12, wherein the exponent of the power function in the dynamic range compression function F4 is determined based on the target contrast value, which is a target value of contrast when an image is displayed, and the actual contrast value, which is the contrast value in the display environment when the image is displayed.
Here, the target contrast value is the target value of the contrast when the image is displayed, for example a value determined by the dynamic range of the display unit that displays the image. The actual contrast value is the contrast value in the display environment when the image is displayed, for example a value determined by the contrast of the image shown by the display unit in the presence of ambient light.
In the visual processing unit of the present invention, the dynamic range compression function F4 can compress a picture signal having a dynamic range equal to the target contrast value into the dynamic range corresponding to the actual contrast value.
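One plausible way to derive such an exponent, shown here only as an assumed example and not stated in the remark: choose the exponent of F4(x) = x**gamma so that a signal spanning the contrast ratio of the target contrast value is mapped onto the contrast ratio of the actual contrast value.

```python
import math

def dr_exponent(target_contrast: float, actual_contrast: float) -> float:
    """Illustrative derivation (an assumption, not the remark's definition):
    choose gamma so that F4(x) = x**gamma maps a signal spanning
    1/target_contrast .. 1 onto a signal spanning 1/actual_contrast .. 1,
    i.e. (1/target_contrast)**gamma == 1/actual_contrast."""
    return math.log(actual_contrast) / math.log(target_contrast)

print(dr_exponent(1000.0, 100.0))   # ~0.667 for a 1000:1 target and 100:1 actual contrast
```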
The visual processing unit according to Remark 14 is the visual processing unit according to any one of Remarks 8 to 13, wherein the enhancement function F5 is a power function.
In the visual processing unit of the present invention, using the enhancement function F5, which is a power function, local contrast can be enhanced and the dynamic range can be converted in a visually natural manner.
The visual processing unit according to Remark 15 is the visual processing unit according to Remark 14, wherein the exponent of the power function in the enhancement function F5 is determined based on the target contrast value, which is a target value of contrast when an image is displayed, and the actual contrast value, which is the contrast value in the display environment when the image is displayed.
In the visual processing unit of the present invention, using the enhancement function F5, which is a power function, local contrast can be enhanced and the dynamic range can be converted in a visually natural manner.
The visual processing unit according to Remark 16 is the visual processing unit according to Remark 14 or 15, wherein, when the value A of the picture signal is larger than the value B of the processing signal, the exponent of the power function in the enhancement function F5 is a value that decreases monotonically with respect to the value A of the picture signal.
In the visual processing unit of the present invention, the enhancement of local contrast is weakened in high-luminance portions where the pixel of interest is brighter than its surrounding pixels. Therefore, so-called whiteout (blown-out highlights) can be suppressed in the visually processed image.
The visual processing unit according to Remark 17 is the visual processing unit according to Remark 14 or 15, wherein, when the value A of the picture signal is smaller than the value B of the processing signal, the exponent of the power function in the enhancement function F5 is a value that increases monotonically with respect to the value A of the picture signal.
In the visual processing unit of the present invention, the enhancement of local contrast is weakened in low-luminance portions where the pixel of interest is darker than its surrounding pixels. Therefore, so-called black crush (blocked-up shadows) can be suppressed in the visually processed image.
The visual processing unit according to Remark 18 is the visual processing unit according to Remark 14 or 15, wherein, when the value A of the picture signal is larger than the value B of the processing signal, the exponent of the power function in the enhancement function F5 is a value that increases monotonically with respect to the value A of the picture signal.
In the visual processing unit of the present invention, the enhancement of local contrast is weakened in low-luminance portions where the pixel of interest is brighter than its surrounding pixels. Therefore, deterioration of the so-called S/N ratio can be suppressed in the visually processed image.
The visual processing unit of Remark 19 is the visual processing unit of Remark 14 or 15, in which the exponent of the power function in the enhancement function F5 is a value that decreases monotonically with respect to the absolute value of the difference between the value A of the image signal and the value B of the processing signal.
Here, a value that decreases monotonically with respect to the absolute value of the difference between the value A of the image signal and the value B of the processing signal may also be defined as a value that increases as the ratio between the value A of the image signal and the value B of the processing signal approaches 1.
In the visual processing unit of the present invention, local contrast is enhanced particularly in pixels of interest whose luminance differs little from that of the surrounding pixels, while local contrast is not excessively enhanced in pixels of interest whose luminance differs greatly from that of the surrounding pixels.
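A minimal sketch, assuming one concrete (hypothetical) decay schedule, of an exponent for F5 that decreases monotonically with |A - B|, so that small local luminance differences are enhanced more strongly than large ones:

    def f5_exponent(a, b, base=0.4, falloff=4.0):
        # base and falloff are illustrative parameters, not values from the remarks;
        # a and b are assumed to be normalized to the range [0, 1].
        return base / (1.0 + falloff * abs(a - b))

    def f5_adaptive(a, b):
        # enhancement of the local ratio A / B with the difference-dependent exponent
        return (a / b) ** f5_exponent(a, b)

    print(f5_adaptive(0.55, 0.50))  # small difference: larger exponent, stronger enhancement
    print(f5_adaptive(0.90, 0.50))  # large difference: smaller exponent, enhancement limited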
The visual processing unit of Remark 20 is the visual processing unit of any one of Remarks 14 to 19, in which at least one of the maximum value and the minimum value of the enhancement function F5 is limited to within a predetermined range.
In the visual processing unit of the present invention, the amount of local contrast enhancement is thereby limited to an appropriate range.
The visual processing unit of Remark 21 is the visual processing unit of Remark 8, in which the signal operation mechanism has an enhancement processing mechanism and an output processing mechanism. The enhancement processing mechanism performs enhancement processing on a divided signal obtained by dividing the image signal by the processing signal. The output processing mechanism outputs the output signal based on the image signal and the enhanced divided signal.
In the visual processing unit of the present invention, the enhancement processing mechanism uses the enhancement function F5 to perform enhancement processing on the divided signal obtained by dividing the image signal by the processing signal, and the output processing mechanism outputs the output signal based on the image signal and the divided signal.
The visual processing unit of Remark 22 is the visual processing unit of Remark 21, in which the output processing mechanism multiplies the image signal by the enhanced divided signal.
In the visual processing unit of the present invention, the dynamic range compression function F4 is, for example, a proportional function with a proportionality coefficient of 1.
The visual processing unit of Remark 23 is the visual processing unit of Remark 21, in which the output processing mechanism includes a DR compression mechanism that performs dynamic range (DR) compression on the image signal, and multiplies the DR-compressed image signal by the enhanced divided signal.
In the visual processing unit of the present invention, the DR compression mechanism uses the dynamic range compression function F4 to compress the dynamic range of the image signal.
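Tying Remarks 21 to 23 together, here is a minimal per-pixel sketch in Python; the power-function forms of F4 and F5 are assumptions (consistent with Remarks 14 and 25) rather than requirements of Remark 21 itself:

    def visual_process_multiplicative(a, b, gamma=0.67, alpha=0.33, eps=1e-6):
        # a: value A of the image signal, b: value B of the processing signal,
        # both assumed normalized to (0, 1].
        # gamma: assumed exponent of the DR compression function F4 (Remark 23).
        # alpha: assumed exponent of the enhancement function F5.
        divided = a / max(b, eps)        # divided signal A / B (Remark 21)
        enhanced = divided ** alpha      # enhancement processing of the divided signal
        compressed = a ** gamma          # DR compression of the image signal
        return compressed * enhanced     # output processing: multiplication (Remarks 22/23)

    print(visual_process_multiplicative(0.8, 0.5))

Setting gamma to 1.0 reduces the assumed F4 to the proportional function with proportionality coefficient 1 mentioned above, in which case only the local contrast term remains active.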
The visual processing unit of Remark 24 is the visual processing unit of any one of Remarks 8 to 23, further comprising a 1st mapping mechanism and a 2nd mapping mechanism. The 1st mapping mechanism transforms input image data in a 1st predetermined range into a 2nd predetermined range and outputs the result as the image signal. The 2nd mapping mechanism transforms the output signal in a 3rd predetermined range into a 4th predetermined range and outputs the result as output image data. The 2nd predetermined range is determined based on the target contrast value, which is the contrast value targeted when the image is displayed. The 3rd predetermined range is determined based on the actual contrast value, which is the contrast value in the display environment in which the image is displayed.
In the visual processing unit of the present invention, the dynamic range of the entire image can be compressed down to the actual contrast value, which is reduced by the presence of ambient light, while the target contrast value is maintained locally. The visual effect of the image after visual processing is therefore improved.
The visual processing unit of Remark 25 is the visual processing unit of Remark 24, in which the dynamic range compression function F4 is a function that transforms the image signal in the 2nd predetermined range into an output signal in the 3rd predetermined range.
In the visual processing unit of the present invention, the dynamic range compression function F4 compresses the dynamic range of the entire image down to the 3rd predetermined range.
The visual processing unit of Remark 26 is the visual processing unit of Remark 24 or 25, in which the 1st mapping mechanism transforms the minimum value and the maximum value of the 1st predetermined range into the minimum value and the maximum value of the 2nd predetermined range, respectively, and the 2nd mapping mechanism transforms the minimum value and the maximum value of the 3rd predetermined range into the minimum value and the maximum value of the 4th predetermined range, respectively.
The visual processing unit of Remark 27 is the visual processing unit of Remark 26, in which the transformations in the 1st mapping mechanism and the 2nd mapping mechanism are each linear transformations.
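A minimal sketch of the linear 1st and 2nd mappings of Remarks 26 and 27; the concrete ranges in the example are illustrative assumptions only:

    def linear_map(x, src_min, src_max, dst_min, dst_max):
        # Map x so that src_min -> dst_min and src_max -> dst_max (Remarks 26/27).
        t = (x - src_min) / (src_max - src_min)
        return dst_min + t * (dst_max - dst_min)

    # Illustrative ranges:
    #   1st range: 8-bit input data, [0, 255]
    #   2nd range: [1/target_contrast, 1], set from the target contrast value
    #   3rd range: [1/actual_contrast, 1], set from the actual contrast value
    #   4th range: 8-bit output data, [0, 255]
    target_contrast, actual_contrast = 1000.0, 100.0
    image_signal = linear_map(200, 0, 255, 1.0 / target_contrast, 1.0)   # 1st mapping
    output_data = linear_map(0.5, 1.0 / actual_contrast, 1.0, 0, 255)    # 2nd mapping
    print(image_signal, output_data)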
The visual processing unit of Remark 28 is the visual processing unit of any one of Remarks 24 to 27, further comprising a setting mechanism that sets the 3rd predetermined range.
In the visual processing unit of the present invention, the 3rd predetermined range can be set according to the display environment of the display device that displays the image, so the correction for ambient light can be performed more appropriately.
The visual processing unit of Remark 29 is the visual processing unit of Remark 28, in which the setting mechanism includes a storage mechanism that stores the dynamic range of the display used in the environment in which the image is displayed, and a measuring mechanism that measures the luminance of the ambient light in the display environment at the time the image is displayed.
In the visual processing unit of the present invention, the luminance of the ambient light is measured, and the actual contrast value can be determined from the measured luminance and the dynamic range of the display.
The visual processing unit of Remark 30 is the visual processing unit of Remark 29, in which the setting mechanism includes a measuring mechanism that measures the luminance during black-level display and during white-level display of the display device in the display environment in which the image is displayed.
In the visual processing unit of the present invention, the luminances during black-level display and during white-level display in the display environment are measured, and the actual contrast value can be determined from them.
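As one hedged way of realizing the setting mechanism of Remarks 29 and 30 (the additive ambient-light model is an assumption, not text from the remarks), the actual contrast value could be computed either from the stored display dynamic range plus a measured ambient luminance, or directly from measured black-level and white-level luminances:

    def actual_contrast_from_ambient(white_cd_m2, black_cd_m2, ambient_cd_m2):
        # Assumed model for Remark 29: reflected ambient light adds the same
        # luminance to the black level and the white level of the display.
        return (white_cd_m2 + ambient_cd_m2) / (black_cd_m2 + ambient_cd_m2)

    def actual_contrast_from_measurement(measured_white_cd_m2, measured_black_cd_m2):
        # Remark 30: luminances measured during white-level and black-level display.
        return measured_white_cd_m2 / measured_black_cd_m2

    # Example: a 1000:1 panel (500 / 0.5 cd/m2) with 5 cd/m2 of reflected ambient
    # light gives an actual contrast of roughly 92:1.
    print(actual_contrast_from_ambient(500.0, 0.5, 5.0))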
The visual processing unit of Remark 31 comprises an input signal processing mechanism and a signal operation mechanism. The input signal processing mechanism performs spatial processing on an input image signal and outputs a processing signal. The signal operation mechanism outputs an output signal based on an operation that enhances the difference between the image signal and the processing signal according to the value of the image signal.
In the visual processing unit of the present invention, the sharp component of the image signal, that is, the difference between the image signal and the processing signal, can be enhanced according to the value of the image signal, for example, so that appropriate enhancement can be performed from the dark portions to the bright portions of the image signal.
The visual processing unit of Remark 32 is the visual processing unit of Remark 31, in which the signal operation mechanism outputs the output signal based on an operation that adds the value obtained by compressing the dynamic range of the image signal to the value obtained by the enhancing operation.
In the visual processing unit of the present invention, the sharp component of the image signal can be enhanced according to the value of the image signal, for example, while the dynamic range is compressed.
The visual processing unit of Remark 33 is the visual processing unit of Remark 31 or 32, in which the signal operation mechanism calculates the value C of the output signal from the value A of the image signal, the value B of the processing signal, an enhancement amount adjustment function F6, an enhancement function F7 and a dynamic range compression function F8, based on the expression F8(A) + F6(A) * F7(A - B).
Here, the value C of the output signal is interpreted as follows. The difference (A - B) between the value A of the image signal and the value B of the processing signal represents, for example, a sharp signal, and F7(A - B) represents, for example, the enhancement amount of that sharp signal. This enhancement amount is further adjusted by the enhancement amount adjustment function F6 according to the value A of the image signal, and is then added, as required, to the image signal after dynamic range compression.
In the visual processing unit of the present invention, the enhancement amount can be reduced when the value A of the image signal is large, for example, so that contrast is maintained from the dark portions to the bright portions. Moreover, even when dynamic range compression is performed, local contrast can be maintained from the dark portions to the bright portions.
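A minimal per-pixel sketch of the operation of Remark 33, C = F8(A) + F6(A) * F7(A - B); the concrete choices of F6, F7 and F8 below are assumptions that merely stay within the constraints of Remarks 34 to 37:

    def visual_process_additive(a, b, gain=0.5, gamma=0.75):
        # a, b: values A and B, assumed normalized to [0, 1].
        # F8: assumed power function (monotonically increasing and upward-convex
        #     for gamma < 1); gamma = 1.0 reduces it to the proportional function
        #     with proportionality coefficient 1 of Remark 34.
        # F7: assumed linear enhancement of the sharp component A - B.
        # F6: assumed adjustment that lowers the enhancement amount as A grows.
        f8 = a ** gamma
        f7 = gain * (a - b)
        f6 = 1.0 - 0.5 * a
        return f8 + f6 * f7

    print(visual_process_additive(0.8, 0.6))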
The visual processing unit of Remark 34 is the visual processing unit of Remark 33, in which the dynamic range compression function F8 is a proportional function with a proportionality coefficient of 1.
In the visual processing unit of the present invention, contrast can be enhanced uniformly from the dark portions to the bright portions of the image signal.
The visual processing unit of Remark 35 is the visual processing unit of Remark 33, in which the dynamic range compression function F8 is a monotonically increasing function.
In the visual processing unit of the present invention, dynamic range compression can be performed using the monotonically increasing function F8 while local contrast is maintained.
The visual processing unit of Remark 36 is the visual processing unit of Remark 35, in which the dynamic range compression function F8 is an upward-convex function.
In the visual processing unit of the present invention, dynamic range compression can be performed using the upward-convex function F8 while local contrast is maintained.
The visual processing unit of Remark 37 is the visual processing unit of Remark 33, in which the dynamic range compression function F8 is a power function.
In the visual processing unit of the present invention, the dynamic range can be converted using the power function F8 while local contrast is maintained.
The visual processing unit of Remark 38 is the visual processing unit of Remark 33, in which the signal operation mechanism has an enhancement processing mechanism and an output processing mechanism. The enhancement processing mechanism performs, on the difference signal between the image signal and the processing signal, enhancement processing that depends on the pixel value of the image signal. The output processing mechanism outputs the output signal based on the image signal and the enhanced difference signal.
In the visual processing unit of the present invention, the enhancement processing mechanism performs enhancement processing using the enhancement function F7 whose enhancement amount is adjusted by the enhancement amount adjustment function F6, and the output processing mechanism outputs the output signal based on the image signal and the difference signal.
The visual processing unit of Remark 39 is the visual processing unit of Remark 38, in which the output processing mechanism adds the enhanced difference signal to the image signal.
In the visual processing unit of the present invention, the dynamic range compression function F8 is, for example, a proportional function with a proportionality coefficient of 1.
The visual processing unit of Remark 40 is the visual processing unit of Remark 38, in which the output processing mechanism includes a DR compression mechanism that performs dynamic range (DR) compression on the image signal, and adds the enhanced difference signal to the DR-compressed image signal.
In the visual processing unit of the present invention, the DR compression mechanism compresses the dynamic range of the image signal using the dynamic range compression function F8.
The visual processing unit of Remark 41 comprises an input signal processing mechanism and a signal operation mechanism. The input signal processing mechanism performs spatial processing on an input image signal and outputs a processing signal. The signal operation mechanism outputs an output signal based on an operation that adds the value obtained by applying gray-scale correction to the image signal to a value that enhances the difference between the image signal and the processing signal.
In the visual processing unit of the present invention, the difference between the image signal and the processing signal represents, for example, the sharp component of the image signal, and the enhancement of the sharp component and the gray-scale correction of the image signal are performed independently. A fixed amount of sharp-component enhancement can therefore be performed regardless of the amount of gray-scale correction applied to the image signal.
The visual processing unit of Remark 42 is the visual processing unit of Remark 41, in which the signal operation mechanism calculates the value C of the output signal from the value A of the image signal, the value B of the processing signal, an enhancement function F11 and a gray-scale correction function F12, based on the expression F12(A) + F11(A - B).
Here, the value C of the output signal is interpreted as follows. The difference (A - B) between the value A of the image signal and the value B of the processing signal represents, for example, a sharp signal, and F11(A - B) represents, for example, the enhancement processing of that sharp signal. The expression thus adds the enhanced sharp signal to the gray-scale-corrected image signal.
In the visual processing unit of the present invention, a fixed amount of contrast enhancement can be performed regardless of the gray-scale correction.
The visual processing unit of Remark 43 is the visual processing unit of Remark 42, in which the signal operation mechanism has an enhancement processing mechanism and an addition processing mechanism. The enhancement processing mechanism performs enhancement processing on the difference signal between the image signal and the processing signal. The addition processing mechanism adds the gray-scale-corrected image signal to the enhanced difference signal and outputs the result as the output signal.
In the visual processing unit of the present invention, the enhancement processing mechanism performs enhancement processing on the difference signal using the enhancement function F11, and the addition processing mechanism adds the image signal corrected with the gray-scale correction function F12 to the enhanced difference signal.
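A minimal per-pixel sketch of Remarks 41 to 43, C = F12(A) + F11(A - B); the gamma-type F12 and linear F11 used here are illustrative assumptions:

    def visual_process_tone_plus_sharp(a, b, sharp_gain=0.5, gamma=1.0 / 2.2):
        # a, b: values A and B, assumed normalized to [0, 1].
        f12 = a ** gamma              # gray-scale correction of the image signal
        f11 = sharp_gain * (a - b)    # enhancement of the sharp (difference) signal
        return f12 + f11              # addition processing (Remark 43)

    # The sharp term does not depend on f12, so the same amount of contrast
    # enhancement is obtained whatever gray-scale correction is applied.
    print(visual_process_tone_plus_sharp(0.8, 0.6))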
The visual processing method of Remark 44 comprises a 1st transformation step, a signal operation step and a 2nd transformation step. The 1st transformation step transforms input image data in a 1st predetermined range into a 2nd predetermined range and outputs the result as an image signal. The signal operation step outputs an output signal in a 3rd predetermined range based on an operation that includes at least one of compressing the dynamic range of the image signal and enhancing the ratio between the image signal and a processing signal obtained by spatially processing the image signal. The 2nd transformation step transforms the output signal in the 3rd predetermined range into a 4th predetermined range and outputs the result as an output image signal. The 2nd predetermined range is determined based on the target contrast value, which is the contrast value targeted when the image is displayed, and the 3rd predetermined range is determined based on the actual contrast value, which is the contrast value in the display environment in which the image is displayed.
In the visual processing method of the present invention, the dynamic range of the entire image can be compressed down to the actual contrast value, which is reduced by the presence of ambient light, for example, while the target contrast value is maintained locally. The visual effect of the image after visual processing is therefore improved.
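Putting the three steps of Remark 44 together, a self-contained sketch for an 8-bit grayscale array; the ranges, the box blur used as the spatial processing, and the power-function operation are all illustrative assumptions (NumPy is used only for convenience):

    import numpy as np

    def visual_processing_method(img8, target_contrast=1000.0,
                                 actual_contrast=100.0, ksize=15):
        lo_t, lo_a = 1.0 / target_contrast, 1.0 / actual_contrast

        # 1st transformation step: [0, 255] -> [1/target_contrast, 1]
        a = lo_t + (img8.astype(np.float64) / 255.0) * (1.0 - lo_t)

        # processing signal B: a simple box blur stands in for the spatial processing
        pad = ksize // 2
        padded = np.pad(a, pad, mode="edge")
        b = np.zeros_like(a)
        for dy in range(ksize):
            for dx in range(ksize):
                b += padded[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        b /= ksize * ksize

        # signal operation step: DR compression of A combined with enhancement of A / B
        gamma = np.log(actual_contrast) / np.log(target_contrast)  # assumed exponent
        out = (a ** gamma) * (a / b) ** (1.0 - gamma)
        out = np.clip(out, lo_a, 1.0)

        # 2nd transformation step: [1/actual_contrast, 1] -> [0, 255]
        return np.round((out - lo_a) / (1.0 - lo_a) * 255.0).astype(np.uint8)

    print(visual_processing_method(np.full((32, 32), 128, dtype=np.uint8)))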
The visual processing unit of Remark 45 comprises a 1st mapping mechanism, a signal operation mechanism and a 2nd mapping mechanism. The 1st mapping mechanism transforms input image data in a 1st predetermined range into a 2nd predetermined range and outputs the result as an image signal. The signal operation mechanism outputs an output signal in a 3rd predetermined range based on an operation that includes at least one of compressing the dynamic range of the image signal and enhancing the ratio between the image signal and a processing signal obtained by spatially processing the image signal. The 2nd mapping mechanism transforms the output signal in the 3rd predetermined range into a 4th predetermined range and outputs the result as an output image signal. The 2nd predetermined range is determined based on the target contrast value, which is the contrast value targeted when the image is displayed, and the 3rd predetermined range is determined based on the actual contrast value, which is the contrast value in the display environment in which the image is displayed.
In the visual processing unit of the present invention, the dynamic range of the entire image can be compressed down to the actual contrast value, which is reduced by the presence of ambient light, for example, while the target contrast value is maintained locally. The visual effect of the image after visual processing is therefore improved.
The visual processing program of Remark 46 is a visual processing program for causing a computer to perform visual processing, and causes the computer to execute a visual processing method comprising a 1st transformation step, a signal operation step and a 2nd transformation step.
The 1st transformation step transforms input image data in a 1st predetermined range into a 2nd predetermined range and outputs the result as an image signal. The signal operation step outputs an output signal in a 3rd predetermined range based on an operation that includes at least one of compressing the dynamic range of the image signal and enhancing the ratio between the image signal and a processing signal obtained by spatially processing the image signal. The 2nd transformation step transforms the output signal in the 3rd predetermined range into a 4th predetermined range and outputs the result as output image data. The 2nd predetermined range is determined based on the target contrast value, which is the contrast value targeted when the image is displayed, and the 3rd predetermined range is determined based on the actual contrast value, which is the contrast value in the display environment in which the image is displayed.
In the visual processing program of the present invention, the dynamic range of the entire image can be compressed down to the actual contrast value, which is reduced by the presence of ambient light, for example, while the target contrast value is maintained locally. The visual effect of the image after visual processing is therefore improved.