Bug 55598 - blended shrunk display mode

Status: RESOLVED DUPLICATE of bug 76096
Product: GIMP
Classification: Other
Component: User Interface
Version: git master
Hardware: Other
OS: All
Importance: Normal enhancement
Target Milestone: ---
Assigned To: GIMP Bugs
QA Contact: Daniel Egger
Depends on:
Blocks:

Reported: 2001-06-01 21:45 UTC by David Monniaux
Modified: 2004-12-22 21:47 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description David Monniaux 2001-06-01 21:45:40 UTC
Currently, the shrunk display is produced by sub-sampling the image (taking
one pixel every n pixels). When applied to high-resolution images containing
fine details (lines drawn with a small pen, small text), this produces
ugly artefacts.

It would be nice to have a "blended" mode that would use the average of the
pixels represented by a single display pixel. This should not be too slow,
especially with vector processing (MMX/Altivec).
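
As a rough illustration of the difference, here is a minimal C sketch of
both approaches for a single-channel (grayscale) buffer. The function names
and the simple n x n box average are assumptions for illustration, not
GIMP's actual render code:

    #include <stdint.h>

    /* Plain sub-sampling: take one source pixel every n pixels.
       Detail narrower than n pixels can vanish or alias. */
    static void
    subsample (const uint8_t *src, int src_w, int src_h,
               uint8_t *dst, int n)
    {
      int dst_w = src_w / n;
      int dst_h = src_h / n;

      for (int y = 0; y < dst_h; y++)
        for (int x = 0; x < dst_w; x++)
          dst[y * dst_w + x] = src[(y * n) * src_w + (x * n)];
    }

    /* "Blended" shrink: average the n x n block of source pixels
       that a single display pixel represents (a box filter). */
    static void
    box_average (const uint8_t *src, int src_w, int src_h,
                 uint8_t *dst, int n)
    {
      int dst_w = src_w / n;
      int dst_h = src_h / n;

      for (int y = 0; y < dst_h; y++)
        for (int x = 0; x < dst_w; x++)
          {
            unsigned int sum = 0;

            for (int j = 0; j < n; j++)
              for (int i = 0; i < n; i++)
                sum += src[(y * n + j) * src_w + (x * n + i)];

            dst[y * dst_w + x] = (uint8_t) (sum / (n * n));
          }
    }

The inner n x n loop is exactly the kind of independent, data-parallel
averaging that vector instructions handle well, which is why MMX/Altivec
is mentioned above.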
Comment 1 Austin Donnelly 2001-06-02 11:25:44 UTC
Yes, the render code could interpolate.  There is scope for
much interesting work here.  However, the decision to use
MMX should not be taken too hastily.  Here are my arguments
against MMX:

From my gimp-developer posting:
---------------------------
An MMX rewrite also has numerous disadvantages.  You'd still need to
keep a C version lying around so that non-Intel platforms can work.
So that means you now have two pieces of code floating around which
should do the same thing, and need to be kept in step with each other.
This is a maintenance nightmare.  Now instead of needing to know just
C, maintainers need to know both C and Intel MMX assembly.  This
instantly reduces the number of people who can help fix bugs.  Also,
bug reports are complicated by the fact that display-related bugs now
need to say which architecture they occurred on.  Given the typically
low quality of bug reports (e.g. "it shows me a garbled image"), even
just finding out the architecture is quite hard.
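
As an illustration of the dual-codepath problem, here is a sketch of the
dispatch pattern such a rewrite typically forces. HAVE_MMX, cpu_has_mmx()
and the render_row_* names are hypothetical, not GIMP's actual code:

    #include <stdbool.h>

    /* Portable C path: must always exist for non-Intel platforms. */
    static void
    render_row_c (const unsigned char *src, unsigned char *dst, int width)
    {
      for (int x = 0; x < width; x++)
        dst[x] = src[x];  /* stand-in for the real render work */
    }

    #ifdef HAVE_MMX
    /* MMX path: must be kept behaviourally identical to render_row_c. */
    void render_row_mmx (const unsigned char *src, unsigned char *dst, int width);
    bool cpu_has_mmx (void);  /* hypothetical runtime CPUID check */
    #endif

    static void
    render_row (const unsigned char *src, unsigned char *dst, int width)
    {
    #ifdef HAVE_MMX
      if (cpu_has_mmx ())
        {
          render_row_mmx (src, dst, width);
          return;
        }
    #endif
      render_row_c (src, dst, width);  /* the fallback you can never delete */
    }

Every behavioural change now has to be made twice, and any divergence
between the two paths shows up only on one class of hardware.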

Re-writing the C to use a better algorithm, or tuning it to place
variables in better locations on the stack, would be a much preferable
solution.  That way the algorithm is still just C.  For example, the
algorithm currently uses an integer fixed-point approximation to keep
track of the render error.  Originally I coded it up using floating
point, but it was slightly slower in the subsequent testing I did, so
I re-coded it in fixed point.  I tested on a Pentium 133 and an Alpha
21064, and both came out ahead using the integer fixed-point algorithm.
Maybe that trade-off is different with today's common CPUs; I haven't
looked into it recently.
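
A minimal sketch of what such an integer fixed-point stepping loop might
look like (the 16.16 format and function name are chosen for illustration;
this is not the actual app/image_render.c code):

    #include <stdint.h>

    /* Step through source pixels at a fractional ratio using 16.16
       fixed-point arithmetic instead of floating point.  The low 16
       bits accumulate the render error mentioned above. */
    static void
    scale_row_fixed (const uint8_t *src, int src_w,
                     uint8_t *dst, int dst_w)
    {
      /* ratio = src_w / dst_w, in 16.16 fixed point */
      uint32_t step = ((uint32_t) src_w << 16) / (uint32_t) dst_w;
      uint32_t pos  = 0;

      for (int x = 0; x < dst_w; x++)
        {
          dst[x] = src[pos >> 16];  /* integer part selects the pixel */
          pos += step;              /* fractional part carries the error */
        }
    }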

I suggest you read "The Practice of Programming" by Brian W. Kernighan
and Rob Pike, Addison-Wesley, 1999, ISBN 020161586X.
In particular, I'm thinking of the chapter on performance tuning.

In summary: find out where the bottleneck actually is (by profiling)
before jumping in and micro-optimising the wrong thing.
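
For example, a typical profiling pass with GCC and gprof looks like this
(the file names are illustrative):

    $ gcc -O2 -pg -o render_test render_test.c   # build with profiling instrumentation
    $ ./render_test                              # exercise a representative workload
    $ gprof render_test gmon.out | head -20      # rank functions by time actually spent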

To answer your original question, the display code is in
app/image_render.c.  Note that not all the render functions were
reachable last time I looked at the code in detail; this was in 1.2.1.
---------------------------
Comment 2 Sven Neumann 2003-06-02 18:38:15 UTC
It seems that bug #76096 talks about the same thing. Since it has more
details, I will mark this report as a duplicate of #76096.

*** This bug has been marked as a duplicate of 76096 ***