brian@ucselx.sdsu.edu (Brian Ho) (11/05/90)
Hello out there,

I have some problems you may be able to help me with. I am trying to scale some bitmap images; the images are characters (e.g. A B a b 1 2 etc.) which have the same font but different sizes. The goal of the scaling transformation is to normalize those characters to the same size, 20pt.

Currently I am using a very simple transformation algorithm: find the height/width of the original image, determine the scaling factors sx and sy from them, and apply those factors to the old image. The result is not as good as we expected. Characters of different sizes end up with different shapes after the transformation, and there is no way to tell that they actually share the same font. The goal we are trying to achieve is to retain the shape/features of the character after transformation.

My questions are:

1> Does anyone have a better algorithm/reference for a scaling transformation that retains the features of the original image (in my case, characters of different sizes)?

2> I have heard from my advisor that there is some sort of mathematical model/set of equations for creating characters at different sizes. The operator simply types in the size of the desired character(s), and the program generates the character by itself. I am wondering if anyone has heard of this model; if so, please give me some information on the subject.

Thank you very much. Please send responses by e-mail:
brian@ucselx.sdsu.edu
brian@yucatec.sdsu.edu

Brian Ho.

Following are the results from my scaling transformation algorithm. The target image is 20pt. Characters are in Times font.
original character = 8 pts

00000000000000000000
00000000000011111110
00000000000011111110
00000000000011111110
00000000000011111110
00000000111100011110
00000000111100011110
00000000111100011110
00000111111111111110
00000111111111111110
00000111111111111110
00000111111111111110
00000111000000011110
00000111000000011110
00000111000000011110
01111111000000011110
01111111000000011110
01111111000000011110
01111111000000011110
00000000000000000000

original character = 12 pts

00000000000000000000
00000000000011111000
00000000000011111000
00000000000011111000
00000000000011111000
00000000001100011000
00000000001100011000
00000000110000011000
00000000110000011000
00000000110000011000
00000000110000011000
00000011111111111000
00000011111111111000
00000011000000011000
00000011000000011000
00011100000000011000
00011100000000011000
01111111000011111110
01111111000011111110
00000000000000000000

original character = 24 pts

00000000000000000000
00000000000000011000
00000000000001111000
00000000000001111000
00000000000010011000
00000000000110011000
00000000000100011000
00000000001100011000
00000000011000011000
00000000010000011000
00000000110000011000
00000001100000011000
00000001111111111000
00000011000000011000
00000010000000011000
00001110000000011000
00011100000000011000
00111100000000011000
01111110000011111110
00000000000000000000
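[Editor's note: the "very simple transformation" Brian describes is nearest-neighbor scaling. A minimal sketch of it, for bitmaps stored as lists of 0/1 rows; the function name and representation are assumed, not from the original post:]

```python
def scale_nearest(src, dst_h, dst_w):
    """Scale a binary bitmap (list of rows of 0/1) to dst_h x dst_w by
    nearest-neighbor sampling: each destination pixel copies the single
    source pixel it maps back onto.  This is the simple sx/sy approach,
    and it is why strokes thicken or thin unevenly across sizes."""
    src_h, src_w = len(src), len(src[0])
    out = []
    for y in range(dst_h):
        sy = min(int(y * src_h / dst_h), src_h - 1)  # source row for this dest row
        row = []
        for x in range(dst_w):
            sx = min(int(x * src_w / dst_w), src_w - 1)  # source column
            row.append(src[sy][sx])
        out.append(row)
    return out
```

For example, `scale_nearest([[1, 0], [0, 1]], 4, 4)` simply doubles each pixel, producing two 2x2 blocks on the diagonal.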
josef@nixdorf.de (Moellers) (11/08/90)
In <1990Nov5.022837.15506@ucselx.sdsu.edu> brian@ucselx.sdsu.edu (Brian Ho) writes:
>Hello out-there,
> I am some problems which you may give me a hand. I am trying to scale
> some bitmap images, those images are characters (e.g. A B a b 1 2 ..etc)
> which have the same font but with different sizes. The goal of the scaling
> transformation is to normalized those characters into the same size, 20pt.
> Currently, I am using a very simple transformation algorithm, which is
> looking for the height/width from the original image, then determine the
> scaling factor, sx and sy. And apply the scaling factor to the old image.
> The result is not as good as we expected. Somehow, characters with different
> sizes resulting different shapes after the transformation, and it is
> no way to tell they are actually has the same font. The goal we are
> trying to achieve is to retain the shape/features of the character after
> transformation.
[ rest deleted ]

I tried an approach described in the "Weekly automated posting": when scaling from a larger font size to a smaller one (the reverse doesn't work), I first determine the dimensions of the smaller font. Example: I want to scale a font from 30 points to 20 points. A character measuring 50x40 pixels is scaled to 33x27 pixels.

Next, I loop through the destination pixels, regarding each pixel as a square portion of the whole character. In scaling, each destination pixel is a mapping of part of the source character. I determine the exact area that the pixel under examination is mapped from, and calculate how much "black" is mapped from the source into the destination pixel. This number (a floating-point number) ranges from 0 to (src-pt-size / dst-pt-size)^2; in the example given above it may range from 0 to 2.25. This number is then divided by its upper bound, yielding a number from 0 to 1 (inclusive).
One can now use this number to provide anti-aliasing, or one can use a threshold to determine whether "enough" black is mapped into this pixel to make it black.

My program, which is too crude to be published, scales an entire HPLJ softfont from a given source point size to a given destination point size, using an (optional) threshold as described above. It is very crude, as it first converts each character from the HPLJ bitmap into an ASCII representation (using '*' and ' '), scales that array, and then converts back into a bitmap. The program even takes care of the various sizes encoded in the headers.

Hope this helps.

BTW, I don't know anything about any mathematical model.

PS: Don't send a REPLY, as the address recorded in the header of this posting is wrong. Please use the address given below!
--
| Josef Moellers               | c/o Siemens Nixdorf Informationssysteme AG |
| USA: mollers.pad@nixdorf.com | Abt. PXD-S14                               |
| !USA: mollers.pad@nixdorf.de | Heinz-Nixdorf-Ring                         |
| Phone: (+49) 5251 104662     | D-4790 Paderborn                           |
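[Editor's note: the area-averaging scheme Moellers describes can be sketched as follows. This is a minimal reconstruction, not his HPLJ program; the function name, bitmap representation, and exact overlap bookkeeping are assumptions:]

```python
def scale_area(src, dst_h, dst_w, threshold=0.5):
    """Downscale a binary bitmap (list of rows of 0/1) to dst_h x dst_w.
    Each destination pixel corresponds to a rectangle of the source;
    sum the black area overlapping that rectangle, normalize by the
    rectangle's area to get a coverage value in [0, 1], and threshold."""
    src_h, src_w = len(src), len(src[0])
    out = []
    for y in range(dst_h):
        y0, y1 = y * src_h / dst_h, (y + 1) * src_h / dst_h
        row = []
        for x in range(dst_w):
            x0, x1 = x * src_w / dst_w, (x + 1) * src_w / dst_w
            black = 0.0
            for sy in range(int(y0), min(int(y1) + 1, src_h)):
                oy = min(sy + 1, y1) - max(sy, y0)  # vertical overlap with [y0, y1)
                if oy <= 0:
                    continue
                for sx in range(int(x0), min(int(x1) + 1, src_w)):
                    ox = min(sx + 1, x1) - max(sx, x0)  # horizontal overlap
                    if ox > 0 and src[sy][sx]:
                        black += oy * ox
            area = (y1 - y0) * (x1 - x0)  # Moellers' upper bound, (src/dst)^2
            row.append(1 if black / area >= threshold else 0)
        out.append(row)
    return out
```

Keeping the `black / area` coverage value instead of thresholding it is exactly the anti-aliasing variant mentioned above.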
lance@motcsd.csd.mot.com (lance.norskog) (11/10/90)
I'd like to suggest another approach to scaling bitmaps. I haven't seen it mentioned anywhere, but I'm sure someone smarter than me thought it up first.

	Perceptron Scaling

This scheme only works for scaling in increments of small whole numbers. In essence, you build a library of bitmap pairs and hunt for matches. For example, if you are scaling up by 2, you have a complete set of input/output pairs:

    * -     * * - -
    * *     * * - -
            * * * *
            * * * *

    - -     - - - -
    - -     - - - -
            - - - -
            - - - -

etc., to which you compare each incoming square of bits. The imaginative can come up with various fast searching algorithms.

To scale an image four times up you can build a new 2->16 input/output set from the 2->4 set, but you might be better off building a separate 16->64 set for better edge and texture preservation. (The preceding statement may be logically bogus.)

Anyway, you get the idea. The point is to preserve edges and textures by searching for known input sets. No, I haven't implemented it and have no intention of doing so.

Lance Norskog
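[Editor's note: Lance's lookup scheme can be sketched as below. The table here is seeded with plain pixel doubling so the code runs; the whole point of the scheme is that individual entries would be hand-tuned to preserve edges and textures. All names are illustrative, and even-sized input is assumed:]

```python
from itertools import product

def make_table():
    """Build a complete table mapping every 2x2 tile to a 4x4 tile.
    Default rule: double each pixel; hand-tuned entries would replace
    individual table[tile] values to smooth edges."""
    table = {}
    for bits in product((0, 1), repeat=4):
        tile = (bits[:2], bits[2:])
        out = tuple(tuple(p for p in row for _ in (0, 1))
                    for row in tile for _ in (0, 1))
        table[tile] = out
    return table

def scale2x(src, table):
    """Scale a bitmap (even height/width, rows of 0/1) up by 2 by
    looking up each incoming 2x2 square of bits in the table."""
    h, w = len(src), len(src[0])
    out = [[0] * (2 * w) for _ in range(2 * h)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            tile = (tuple(src[y][x:x + 2]), tuple(src[y + 1][x:x + 2]))
            big = table[tile]
            for dy in range(4):
                for dx in range(4):
                    out[2 * y + dy][2 * x + dx] = big[dy][dx]
    return out
```

With 16 possible 2x2 inputs the table is small; for larger tiles or scale factors the "fast searching algorithms" Lance mentions become the interesting part.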
olsen@hpfcdq.HP.COM (John Olsen) (11/14/90)
lance@motcsd.csd.mot.com (lance.norskog) says:
>I'd like to suggest another approach to scaling bitmaps. I haven't
>seen it mentioned anywhere but I'm sure someone smarter than me
>thought it up first.
>	Perceptron Scaling
>This scheme only works for scaling in increments of small whole numbers.
>In essence, you build a library of bit map pairs, and hunt for matches.
>...

You're describing something similar to an item I wrote for the book Graphics Gems ("Smoothing Enlarged Monochrome Images"). My method uses rules sort of like you describe, but the technique isn't limited to integer scaling factors.

It doesn't particularly apply to the problem described in the base note: although it will smooth the edges as desired, it will do nothing to correct the stroke thickness caused by scaling a small character up to a big one. You could accomplish this thinning with a very large set of rules as described in Lance's posting, but it would take a lot of work to create those rules, and they would only work for the scaling factor they were designed for.

John M. Olsen, Graphics Technology Division   (303)229-6746
olsen@hpfcjo.HP.COM, olsen@hpfcdq.HP.COM
Hewlett-Packard, Mail Stop 74, 3404 E. Harmony Road, Ft Collins, CO 80525
doug@eris.berkeley.edu (Doug Merritt) (12/01/90)
In article <1990Nov5.022837.15506@ucselx.sdsu.edu> brian@ucselx.sdsu.edu (Brian Ho) writes:
>Hello out-there,
> I am some problems which you may give me a hand. I am trying to scale
> some bitmap images, those images are characters (e.g. A B a b 1 2 ..etc)

This is a member of a well-known and very hard set of related problems, which are still generally considered unsolved in the general theoretical case. However, John D. Hobby has recently published some extremely sharp results in this area; although this is not going to be the last word on the subject, it *is* brilliant landmark research. See "Generating Automatically Tuned Bitmaps from Outlines", AT&T Bell Laboratories Computing Science Technical Report No. 148. It will also be published in a journal, although which one and when will not be announced until acceptance, of course.

The basic idea seems to be selecting a number of scaling error criteria, such as bending distortion, stretching distortion, etc., and doing a least-squares fit to minimize the error. The paper is quite analytical, not a cookbook, and source code is not available, at least not right now. This paper doesn't seem to be at all well known yet; perhaps I'm letting the cat out of the bag? :-)

Doug

Doug Merritt		doug@eris.berkeley.edu (ucbvax!eris!doug)
or uunet.uu.net!crossck!dougm