Subject: Is there still room for improvement?
Posted by: fa…@email.com (SuperFly)
Date: Fri, 22 Aug 2003
I talked with a friend about the current state of data compression. He
claimed that we won't be seeing any big improvements like the jump from
gif/fli and arc/zip/lzh/arj to jpg and mpg, and that the best
compression schemes are now so close to perfect that the only thing
left to improve is how much memory and CPU power they use.
When I thought about it, I had to agree. For a while fractal and
wavelet coders seemed to be options that could do much better than
DCT-based image coding, but looking back, the difference (at least for
wavelets) isn't really that big.
And in the field of lossless data compression, arithmetic-style coders
don't seem to leave much room for improvement.
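To make the "not much room for improvement" claim concrete: an arithmetic coder can get arbitrarily close to the Shannon entropy of its model, so once the model is fixed there is almost nothing left to gain. Here is a small sketch (my own illustration, not anyone's production code) that computes the order-0 entropy, the bits-per-symbol floor for any coder that treats symbols as independent:

```python
import math
from collections import Counter

def order0_entropy(data: bytes) -> float:
    """Average bits per symbol under an i.i.d. (order-0) model --
    the lower bound any memoryless coder is up against."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

text = b"abracadabra"  # toy input
h = order0_entropy(text)
print(f"order-0 entropy: {h:.3f} bits/symbol")
# An arithmetic coder needs roughly h * len(text) bits for this string,
# plus a few bits of overhead -- essentially optimal for this model.
```

The point is that the remaining headroom is all in the *model* (how well you predict the next symbol), not in the entropy coder itself.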
Is this assumption correct, and is the work close to being finished, so
to speak, or are there still areas that leave room for big
improvements? If so, does anyone have information on those areas? (I
can imagine that companies like IBM, Microsoft, etc. are working on
next-generation schemes.) Personally I think an area like pattern
recognition can still be improved, because a lot of current compression
schemes miss patterns that are obvious to a human but not yet to a
machine.
Any thoughts and/or links on this?
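The pattern-recognition point above can be illustrated with a tiny experiment (a sketch of my own, with made-up function names): a model that conditions on the previous symbol (an order-1 context model) needs fewer bits per symbol than a memoryless model whenever the data has structure, which is exactly the kind of gain better "pattern recognition" buys.

```python
import math
from collections import Counter, defaultdict

def order0_entropy(data: bytes) -> float:
    """Bits per symbol with no context (i.i.d. model)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def order1_entropy(data: bytes) -> float:
    """Bits per symbol given the previous symbol (order-1 context)."""
    contexts = defaultdict(Counter)
    for prev, cur in zip(data, data[1:]):
        contexts[prev][cur] += 1
    total = len(data) - 1
    h = 0.0
    for counts in contexts.values():
        n = sum(counts.values())
        # entropy within this context, weighted by how often it occurs
        h += n / total * -sum(c / n * math.log2(c / n)
                              for c in counts.values())
    return h

data = b"the quick brown fox jumps over the lazy dog " * 50
print(f"order-0: {order0_entropy(data):.3f} bits/symbol")
print(f"order-1: {order1_entropy(data):.3f} bits/symbol")
```

On structured input the order-1 figure comes out well below the order-0 one; richer context models (as in PPM-style coders) push it lower still, which is where I suspect the remaining room for improvement lies.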