Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Sep 3, 2019.
A pixel is measured as square when its height and width are equal, unless you have a Pixel Aspect Ratio applied to the pixel itself. A pixel has no defined size unless you actually measure it, or the manufacturer tells you what the size of the pixel is - but that is not your point (I think). Powers of two are used for pixels because they must have width and height - and they are square.
The calculation notation is nK, with n being a multiple of the K.
From your own link:
"For convenience, pixels are normally arranged in a regular two-dimensional grid"
Pixel structures, on a variety of panel types:
A pixel is a point. You cannot measure a point with dimensions like width and height. All the rest of your argument is invalid.
That's a projection of many pixels into a two-dimensional coordinate space, which is used for texel representations. Still without physical dimension, though (you cannot touch a pixel). Please note that a texel is not a pixel (despite Microsoft's decision to name that abomination - from a semantic point of view - the "pixel shader". "Texel shader", or even better "fragment shader", would be the right name; yes, the Khronos Group got it right).
Those are not pixels; those are different liquid crystal layouts used in TFT-derived panels.
But if you all cannot trust me (or you just cannot understand a simple, for-dummies-level Wikipedia page), at least trust what Alvy Ray Smith says: http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf
He is not talking about display panels when he says that, did YOU read that document?
Pixels (unless they have a PAR) are square, it even says that in the document!
And, with regards to texel representation, what on earth do you think a computer monitor is doing? Of course it is a two dimensional grid, because (again, your PDF explains this quite clearly) they are arranged in rows and columns!
Good grief, you are making this so much harder than it is...
Let me put this in a language you are using: pixels are one row high by one column wide in size, whatever the measurement of those rows and columns is. This means that because they are arranged in two-dimensional grids (when projected/displayed), they are square!
However, this is about the classification of what 1,024 multiplied by 8 actually is when presenting the information to consumers (business or personal) - that is what the classification is for, and it is this which is being inaccurately communicated as a result of the classification.
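The 1,024-versus-1,000 gap the poster is pointing at can be shown in a couple of lines. This is just an illustrative sketch; the decimal/binary labels are the standard SI/IEC conventions, not something the poster spelled out:

```python
# Decimal (SI "kilo") vs binary interpretation of "8 x 1,024":
decimal_8k = 8 * 1000   # 8,000 - what marketing-style "kilo" counting gives
binary_8k = 8 * 1024    # 8,192 - what the power-of-two classification gives

# The gap between the two is what gets glossed over when the
# classification is presented to consumers.
print(decimal_8k, binary_8k, binary_8k - decimal_8k)
```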
There is one simple way to push 8K traffic over a slow Internet connection. Let's say 5 people have separate 8K TVs but only one 80 Mbps broadband line into the house, and everyone wants to watch a different 8K movie at the same time. The solution would be to equip all the 8K TVs with a viewer/eye-tracking add-on. The TV informs the transmitter about any viewers who are too far from the screen, and the transmitter drops the stream quality down to 4K, 2K, or 1K for them.

If a viewer is very close to the TV, eye-tracking mode could be enabled. The video image could be divided into 16 blocks, with the possibility of streaming each block at a different quality. The eye-tracking device reports the coordinates the viewer's eyes are focused on (let's say block [2:3]). The transmitter can then send that block at 8K quality and the surrounding blocks at 4K, 2K, or 1K. Due to the structure of the eye's retina, the viewer won't see the difference - everything will look like 8K.

And this is how you could provide different content at 8K perceived quality to multiple users over a single 80 Mbps copper broadband line. With this simple trick, you could experience 8K quality over a 20 Mbps line, which is way below the required bandwidth for 8K. Of course, the source video stream must be encoded appropriately and there will be a little more work for the transmitter, but I can bet it would be many times cheaper than laying miles of optical cable for 8K infrastructure.
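The gaze-driven block idea above can be sketched in a few lines. This is only a toy model under my own assumptions: a 4x4 grid (the 16 blocks mentioned), four quality tiers, and quality falling off with Chebyshev distance from the gaze block - none of these specifics come from the post:

```python
QUALITY_TIERS = ["8K", "4K", "2K", "1K"]  # best to worst

def block_quality(gaze_row, gaze_col, rows=4, cols=4):
    """Assign each block a quality tier based on its Chebyshev
    distance from the block the viewer is looking at."""
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            dist = max(abs(r - gaze_row), abs(c - gaze_col))
            row.append(QUALITY_TIERS[min(dist, len(QUALITY_TIERS) - 1)])
        grid.append(row)
    return grid

# Viewer focused on block [2:3], as in the example above:
for row in block_quality(2, 3):
    print(row)
```

Only the gaze block streams at full 8K; everything further away degrades toward 1K, which is where the bandwidth saving would come from.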
I like your way of thinking, but that isn't realistically possible when streaming. There would be enough of a latency issue with that even if the file was stored locally. The delay between you and the server would make it unusable. Considering that idea requires computer vision, the amount of processing power to handle that might as well be used for a not-so-lossy but more compressed video format.
Honestly though, I have a hard time imagining 8K streaming is going to be a commodity for a while. Even 4K is still pretty limited, and most people are totally satisfied with 1080p. In a lot of cases, you're better off watching 1080p or 1440p vs a highly compressed and lossy 4K.
To work off of your idea - for most media, typically only 1/3 of the total pixel area is actually used to highlight the subject (that includes some vertical space too). So, maybe one option would be to have better quality in that center 1/3 of the video, whereas the rest is more lossy.
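That center-weighted idea could be sketched like this - a toy model where the middle third of the frame's width gets a higher bitrate and the outer thirds stay lossy. The bitrate numbers, boost factor, and column-based split are all my own illustrative assumptions:

```python
def bitrate_for_column(col, width, base_kbps=2000, boost=4):
    """Give the middle third of the frame `boost`x the base bitrate,
    leaving the outer thirds more compressed. Thresholds are illustrative."""
    third = width / 3
    if third <= col < 2 * third:
        return base_kbps * boost
    return base_kbps

# For a 3840-pixel-wide frame, only columns 1280..2559 get the boost:
print(bitrate_for_column(100, 3840))   # outer third: base quality
print(bitrate_for_column(1920, 3840))  # center third: boosted quality
```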
let's not forget those 60 gig 8k videos.....even my data-warehousing ass isn't looking forward to that.
i'll be pushing up daisies before they broadcast in full HD (1080p) in the UK. So all this shite about 4K and 8K is a waste of time for the broadcasting companies.