Hi:
I have been experimenting with using a TI DSP (TMS320F2812, 150MHz 32-bit integer CPU) to drive an x,y scanner pair (Catweazles) to do abstracts. Now I would like to try some static vector drawing frames.
When we speak of 30k points/second (pps), for instance, we mean that the DAC is updated at 30kpps. The frame rate thus caps the maximum possible points per frame as:
PPF = PPS / FPS , where:
PPF is points per frame
PPS is points per second
FPS is frames per second.
Do laser show designers/software packages allow one to specify a frame rate? Usually I only hear of being able to specify a PPS scan rate. Thus if, for instance, a frame has 500 points, you get:
FPS = 30kpps / 500ppf = 60 fps
Then if you change to a 1000 point frame at the same PPS, you would be rendering that frame at only 30fps.
It is surely desirable to keep the FPS high enough to avoid flicker, so I suspect something like the movie rate of 24fps is a practical lower limit. That means with a 30kpps scanner, one is limited to <=1250ppf.
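The budget arithmetic above is simple enough to sketch in C (the function names max_ppf and frame_rate are mine, just for illustration):

```c
#include <stdint.h>

/* PPF = PPS / FPS: maximum points-per-frame budget at a given DAC rate.
   pps: DAC update rate (points/second); min_fps: lowest acceptable frame rate. */
static uint32_t max_ppf(uint32_t pps, uint32_t min_fps)
{
    return pps / min_fps;  /* integer truncation stays within the budget */
}

/* Resulting frame rate when a frame of ppf points is scanned at pps. */
static double frame_rate(uint32_t pps, uint32_t ppf)
{
    return (double)pps / (double)ppf;
}
```

With pps = 30000 this reproduces the numbers above: max_ppf(30000, 24) gives 1250 points, frame_rate(30000, 500) gives 60 fps, and frame_rate(30000, 1000) gives 30 fps.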
Am I on the right track here?
What concerns me more is how one determines the physical spacing of samples in a frame, for instance along a single straight or curved line segment. Do DAC controllers interpolate between samples, using a higher internal update rate to sweep smoothly between the points?
Otherwise, if the samples are widely spaced compared to the beam diameter, the scanner will basically step from one point to the next, pausing at each one. The result would look like a sequence of dots with faint connecting traces rather than a smooth line.
Thus, it seems one of two methods is needed. First: sample closely enough relative to the beam diameter that the "dots", combined with the scanner's finite slew rate, blend into the appearance of a smooth line. Second: a subsampling/interpolation algorithm that sweeps smoothly between more widely spaced samples. The first method has the drawback of needing a lot of points for complex frames, while the second requires much more CPU processing but allows sparsely sampled frames (less data).
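The second method could be as simple as linear interpolation at the DAC update rate. A minimal sketch in C (point_t, interp_segment, and the 16-bit signed coordinate format are my own assumptions, not anything from an existing package):

```c
#include <stdint.h>
#include <stddef.h>

/* One frame point: 16-bit signed X/Y, as a typical DAC word. */
typedef struct { int16_t x, y; } point_t;

/* Emit 'steps' DAC sub-points sweeping linearly from p0 toward p1.
   The output includes p0 but not p1, so consecutive segments chain
   without duplicated points. 32-bit intermediates avoid overflow on
   the 16-bit deltas. Returns the number of points written to 'out'. */
static size_t interp_segment(point_t p0, point_t p1,
                             unsigned steps, point_t *out)
{
    int32_t dx = (int32_t)p1.x - (int32_t)p0.x;
    int32_t dy = (int32_t)p1.y - (int32_t)p0.y;
    for (unsigned i = 0; i < steps; i++) {
        out[i].x = (int16_t)(p0.x + dx * (int32_t)i / (int32_t)steps);
        out[i].y = (int16_t)(p0.y + dy * (int32_t)i / (int32_t)steps);
    }
    return steps;
}
```

In practice 'steps' would be chosen per segment so that adjacent sub-points land within roughly one beam diameter of each other; a fixed-point accumulator would avoid the per-point divide on an integer CPU like the F2812.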
I am considering implementing such an algorithm on the DSP, because it is short on memory for storing frames but rich in processing power to synthesize the signal by brute force.
Any input will be of interest. I wish to learn more about how the actual frame rendering process works in typical laser show programs. I suppose I need to find docs on the ILDA file format next as well, and see if I can understand it...