What algorithms to use for image downsizing?
Which is faster?
What algorithm is used for image resizing (especially downsizing, e.g. from a big 600×600 down to a very small 6×6) by platforms such as Flash, Silverlight, and HTML5?
Bilinear is the most widely used method and can be made to run about as fast as the nearest neighbor down-sampling algorithm, which is the fastest but least accurate.
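To make the speed/accuracy trade-off concrete, here is a minimal nearest-neighbor downsizer (a sketch, not any platform's actual implementation); images are assumed to be lists of rows of grayscale values, and the function name is my own:

```python
def nearest_resize(img, new_w, new_h):
    """Downsize by picking, for each output pixel, the single nearest
    source pixel -- fast, but prone to aliasing when shrinking a lot."""
    h, w = len(img), len(img[0])
    return [
        [img[j * h // new_h][i * w // new_w] for i in range(new_w)]
        for j in range(new_h)
    ]
```

Because each output pixel reads exactly one input pixel, most of the source data is simply discarded when shrinking heavily, which is where the inaccuracy comes from.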
The trouble with a naive implementation of bilinear sampling is that if you use it to reduce an image by more than half, you can run into aliasing artifacts similar to those you would get with nearest neighbor. The solution is a pyramid-based approach: to reduce 600×600 to 30×30, you first reduce to 300×300, then 150×150, then 75×75, then 38×38, and only then use bilinear sampling for the final reduction to 30×30.
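The pyramid approach described above can be sketched as repeated bilinear halving followed by one final bilinear step. This is an illustrative implementation under my own assumptions (grayscale images as lists of rows, pixel-center coordinate mapping); the function names are hypothetical:

```python
def bilinear_resize(img, new_w, new_h):
    """Resize by sampling each output pixel center back into the source
    image and blending the four surrounding pixels."""
    h, w = len(img), len(img[0])
    out = []
    for j in range(new_h):
        # map the output pixel center into source coordinates
        sy = (j + 0.5) * h / new_h - 0.5
        y0 = max(0, min(h - 1, int(sy)))
        y1 = min(h - 1, y0 + 1)
        fy = sy - y0
        row = []
        for i in range(new_w):
            sx = (i + 0.5) * w / new_w - 0.5
            x0 = max(0, min(w - 1, int(sx)))
            x1 = min(w - 1, x0 + 1)
            fx = sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

def pyramid_downsize(img, new_w, new_h):
    """Halve repeatedly (with bilinear, which at exactly half averages
    2x2 blocks) until one more halving would undershoot the target,
    then finish with a single bilinear step."""
    h, w = len(img), len(img[0])
    while w // 2 >= new_w and h // 2 >= new_h:
        w, h = w // 2, h // 2
        img = bilinear_resize(img, w, h)
    return bilinear_resize(img, new_w, new_h)
```

With integer halving the intermediate sizes may differ by a pixel from the 600→300→150→75→38 sequence in the answer (e.g. 75 halves to 37 here), but the principle is the same: no single step shrinks by more than a factor of two.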
When reducing an image by exactly half, the bilinear sampling algorithm becomes much simpler: each output pixel is just the average of the corresponding 2×2 block of input pixels.
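The half-size special case can be sketched as a plain 2×2 box average (my own minimal version, assuming a grayscale image as a list of rows with even dimensions):

```python
def halve(img):
    """Reduce an image by exactly half: each output pixel is the
    average of a 2x2 block of input pixels."""
    h, w = len(img), len(img[0])
    return [
        [
            (img[2 * y][2 * x] + img[2 * y][2 * x + 1]
             + img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
            for x in range(w // 2)
        ]
        for y in range(h // 2)
    ]
```

Because every input pixel contributes to exactly one output pixel with equal weight, this needs no multiplies per sample beyond the final division, which is why halving steps are so much cheaper than a general bilinear resize.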