[Tutorial] [Repost] 8-bit vs 10-bit panels

Source: http://www.avsforum.com/forum/16 ... s-10-bit-panel.html

I have worked in the electronics field, specifically with TVs, for the past 5 years, and now more than ever this question is popping up. The simple answer is that, as of now, it makes little to no difference whether you have a 10-bit or an 8-bit LCD. The reason is that there is no 10-bit source out there to feed a 10-bit panel, and when you send 8-bit to a 10-bit TV, the TV has to do more processing, and as everyone knows, more thinking done by the TV means a lower-quality picture.

With Blu-ray now being the last physical source format other than gaming, there will be no such source in the future either; the next generation of movies and the like will be internet-based, downloadable content. With compression ratios of 13:1 and growing online, there is no need for 10-bit, and the satellite and cable companies have said they will not increase their bandwidth, which means they cannot fit a 10-bit signal.

You may also notice that the biggest player in LCD, Samsung, has gone with an 8-bit panel in all of their new TVs. I can tell you I have seen an 8-bit Samsung next to a 10-bit Sony XBR in a panel comparison using an HQV disc, and there is no real difference; Samsung's processing makes up the difference in colour reproduction. So don't get caught up in the numbers game, just see if you like the picture.

But I'll chime in anyway. I am a professional multiplexer and CODEC developer working every day with 10- and 12-bit video sources. To begin with, killswitch_19 is correct on nearly all points, but there is in fact a case where 10 bits is an improvement over 8 bits, even when the source itself is only 8 bits.

To start, 8-bit means that for red (R), green (G), and blue (B), the values 0 to 255 can be represented. For 10-bit, the RGB values can range from 0 to 1023. This means that, per component, 10-bit is 4 times as detailed as 8-bit. Therefore, if you had a raw image with 10-bit depth, it would have a color palette 64 times as large (4 × 4 × 4 = 64) to represent the image on your screen. In the case of high-definition video, with the exception of footage from EXTREMELY high-end cameras (starting with the RedOne cinema camera and upward), you will never come across media of this scale. The reason is that it would require a signal of 3.125 gigabits per second to transmit it properly. TV networks with half-million-dollar cameras broadcast sports from the arena to the network at less than a third of this speed, and with quality loss.
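As a rough illustration of that arithmetic, here is a small Python sketch; the 1080p frame size and the 50 fps frame rate are assumptions chosen to land near the ~3 Gbit/s figure quoted above:

```python
# A rough sketch of the bit-depth arithmetic above. The 1080p frame size and
# the 50 fps frame rate are assumptions, not figures from the original post.

def channel_levels(bits_per_channel):
    """Distinct values a single colour channel can hold."""
    return 2 ** bits_per_channel

def palette_size(bits_per_channel):
    """Total RGB colours: levels for R x levels for G x levels for B."""
    return channel_levels(bits_per_channel) ** 3

def raw_bitrate_gbps(width, height, bits_per_channel, fps):
    """Uncompressed RGB bitrate in gigabits per second."""
    bits_per_frame = width * height * 3 * bits_per_channel
    return bits_per_frame * fps / 1e9

print(channel_levels(8), channel_levels(10))   # 256 1024 levels per channel
print(palette_size(10) // palette_size(8))     # 64 -> the palette is 64x larger
print(raw_bitrate_gbps(1920, 1080, 10, 50))    # ~3.11 Gbit/s of raw 10-bit 1080p
```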

The piddly 50 Mbit/s that you get from high-definition formats would almost certainly not benefit from higher bit depths, as it is already stretching itself quite far by employing 150:1 compression to begin with.

The case where 10 bits on a consumer screen makes a big difference is in upscaling video from a lower resolution. Each color channel (red, green, blue) is multiplied by 4 before scaling, to make it a 10-bit value to begin with. Then the image is scaled up by calculating values from "neighboring" pixels and inserting them in between the existing pixels.

If you work in 8-bit, and you have a pixel with the value 1 and the pixel next to it has the value 2, then if you were to double the size of the image, the pixel inserted in between is calculated by adding the two values together and dividing by two. So (1 + 2) / 2 = 1.5.

1.5 is not a valid pixel value, so it would become either 1 or 2 (although scaling systems are generally smart enough to use a more complex calculation that takes other pixels into account as well).

Using the same values in 10-bit, and therefore multiplying the 1 and the 2 each by 4, we start with the values 4 and 8 instead. So (4 + 8) / 2 = 6. Since 6 is obviously a valid value, instead of the 8-bit result, which would be either 1, 1, 2 or 1, 2, 2, we get a higher-quality scaled sequence of 4, 6, 8.
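A minimal Python sketch of that midpoint calculation, assuming a simple two-pixel average (real scalers use more neighbours, as noted above):

```python
# A minimal sketch of the midpoint example: interpolating between two
# neighbouring 8-bit pixel values directly, versus in a 10-bit working space.

def midpoint_8bit(a, b):
    """Insert a pixel between a and b, rounded to a whole 8-bit value."""
    return round((a + b) / 2)

def midpoint_10bit(a, b):
    """Promote the 8-bit values to 10-bit (multiply by 4) before averaging."""
    return (a * 4 + b * 4) // 2

print(midpoint_8bit(1, 2))    # 2 -- the true midpoint 1.5 cannot be stored in 8-bit
print(midpoint_10bit(1, 2))   # 6 -- the "1.5 in between" value, kept without rounding error
```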

The result is that the "sub-pixel sampling", i.e. the pixels inserted in between the encoded pixels, is of a higher precision. The visible result, in special circumstances (you generally saw it more during the earlier jumps from 5 to 6 bits per channel), is that there is much less color banding in the picture.
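To make the banding point concrete, here is a toy sketch (illustrative only, not any scaler's actual code) of how a smooth ramp between two 8-bit levels quantizes at each bit depth:

```python
# A toy sketch of banding: a smooth ramp between the 8-bit levels 1 and 2 can
# only take two values in 8-bit, but five distinct values once promoted to 10-bit.

ramp = [1 + i / 8 for i in range(9)]       # ideal values 1.000, 1.125, ... 2.000

eight_bit = [round(v) for v in ramp]       # [1, 1, 1, 1, 2, 2, 2, 2, 2]
ten_bit = [round(v * 4) for v in ramp]     # [4, 4, 5, 6, 6, 6, 7, 8, 8]

print(len(set(eight_bit)), "distinct 8-bit steps")   # 2
print(len(set(ten_bit)), "distinct 10-bit steps")    # 5
```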

The quality is improved even further when linear and temporal color scaling is taken into consideration. This is when the previous pictures, and the pixels around each pixel, are used to help scale the current picture, so the scaler has as much data as possible to help it estimate the new value of each pixel when scaling.
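As a rough illustration only (a toy average, not any particular TV's algorithm), neighbouring samples and the previous frame might be combined in the 10-bit working space like this:

```python
# A toy sketch of combining spatial neighbours with the co-located pixel of the
# previous frame, all promoted from 8-bit to 10-bit (x4) before averaging.

def estimate_pixel(left, right, above, below, previous_frame):
    """Plain average of four spatial neighbours and one temporal sample."""
    samples = [left, right, above, below, previous_frame]
    return sum(v * 4 for v in samples) // len(samples)

# Neighbours 1, 2, 1, 2 and a previous-frame value of 2:
print(estimate_pixel(1, 2, 1, 2, 2))   # 6 (exact average 6.4, truncated to a 10-bit step)
```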

I know this is a bit too detailed for a forum like this, but I felt like jumping in.

To summarize, depending on the quality of the processor being used for scaling in the TV, it is possible to greatly improve the quality of an SD, 720p, or even a 1080i (during the deinterlacing phase) picture on a 1080p screen using 10-bit color channel resolution, since detail is filled in by estimating values for pixels that were not represented in the source media.

That being said, going from 16.7 million colors per pixel to a little over 1 billion colors per pixel is not as earth-shaking as it may sound. Thanks to motion in pictures, it's not likely to make a big enough difference to matter, especially in the case of backlit screens, but that's an entirely separate discussion.

Blu-ray is still only 8-bit for now.

8-bit monitors aren't even widespread yet... sigh.

So much English... I read two sentences and gave up.

Thanks for sharing, learned something!

REC.2020 defines such a wide color gamut that the bit depth has no choice but to go up to 10-bit/12-bit.
Higher bit depth also helps when displaying HDR images.
