**ken@turtlevax.UUCP (Ken Turkowski)** (09/16/84)

> given: any number of points in n-space and any number of vectors (of course
> also in n-space) associated with each point.
>
> find: for any given point, x, find its most likely vector.

Sounds like a quantization problem to me. There was a paper, maybe in IEEE Computer Graphics and Applications recently, about scanning a sample space with Peano curves to get clusters of sample points. If your quantization values ("vectors") are fixed, then all you need is an appropriate metric to determine the distance of each "point" to each "vector"; then pick the closest "vector".

I've been using quotes because of the nonstandard usage of the term "vector". A vector is a magnitude with a direction and has no root; i.e., it can float around all over the place. I suspect that you really mean a vector rooted at the origin, which raises the question of why you didn't use the term "point" for both.

--
Ken Turkowski @ CADLINC, Palo Alto, CA
UUCP: {amd,decwrl,dual,flairvax,nsc}!turtlevax!ken
ARPA: turtlevax!ken@DECWRL.ARPA
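The "pick the closest vector under an appropriate metric" step could be sketched like this (a minimal illustration, assuming the Euclidean metric; the function and variable names are mine, not from the post):

```python
import math

def closest_vector(point, candidates):
    """Return the candidate closest to `point` under the Euclidean metric.

    `point` and each candidate are sequences of n coordinates; the
    candidates play the role of the fixed quantization values
    ("vectors") mentioned above.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    return min(candidates, key=lambda c: dist(point, c))

# Quantize a 2-D query point to the nearest of three fixed values.
print(closest_vector((1.0, 1.0), [(0.0, 0.0), (2.0, 2.0), (1.0, 0.5)]))
```

Any other metric (e.g. a weighted norm) could be dropped in for `dist` without changing the structure.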

**gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>)** (09/19/84)

Judging by your diagram, you appear to want to take the existing data as samples of some vector field and then interpolate to find the field value at a given point. Without some conditions on the vector field there is of course no single solution to this problem.

The diagram also shows multiple field values at a single point, which suggests that "least squares" style fitting would be appropriate; to make that work, I think there would have to be some implied order to the points.

As a rough approximation, how about making the vector at the test point a weighted average of all known vectors, with the weights chosen so that the distance to the nearest point (using the Euclidean metric) scales the "range" of the weights and the total weights sum to 1? E.g., except right at a data point, a Gaussian function of distance from the test point could be used.