The word "frame" means a bunch of different things, and it's confusing.
A "frame-based" laser interface, to me, means that the basic interface function has these semantics: "Here is a list of points to be scanned. Run this data repetitively until a new list of points is sent, or until it has been repeated n times; if the count expires, then an underrun has happened, so stop the lasers."
This is a pretty common sort of API, but it's unnecessarily complicated. A laser "frame" is a list of points that traces out a desired image once, like a frame in an animation. The number of points in the frame depends on how complex the image is and how accurately it is rendered. If a frame is repeated continuously, then the frame rate - the frequency at which it repeats - depends on the rate at which points are scanned and on the number of points in the frame. A complicated frame will thus be "more flickery". But people are used to frame rates being fixed, as they are in film and TV, and animators like to think that way too, which means that having a different number of points in each frame is totally wrong from an animation point of view. A frame is just an arbitrarily-chosen set of points, and to play those points back at a constant frame rate, every frame has to be the same size.
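To make the arithmetic concrete (the scan rate here is just an illustrative number):

```c
#include <stddef.h>

/* frame_rate = points_per_second / points_per_frame
 * e.g. at 30,000 points/sec, a 600-point frame repeats at 50 Hz,
 * while a 1,500-point frame of a more detailed image repeats at only 20 Hz. */
double frame_rate(double points_per_second, size_t points_per_frame)
{
    return points_per_second / (double)points_per_frame;
}
```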
But a high-quality piece of laser output hardware should be able to produce output continuously, and in that case, there's no need for the set of points passed to the device in one call to be the same as the set of points that forms one complete picture in the animation. Unless the "repeat this data until I give you more" functionality is used - and it shouldn't be, as I'll get to later - there's no reason why "frames" can't be chopped up and sent across multiple API calls, or why several frames can't be combined and sent at once.
Now, the alternative is an API that looks like this: "Here is a list of points to be scanned. Add this data to your playback buffer. If the buffer runs out, stop the lasers." This is a lot simpler, it makes better use of buffer capacity, it produces the same output in the no-repeating-frames case, and it does not promulgate the unfortunate myth that the galvos or DAC or any actual hardware knows or cares when one image turns into another.
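A minimal sketch of that style, using the same hypothetical names and types as the sketch above:

```c
/* Streaming semantics: append `points` to the device's playback buffer.
 * Playback is continuous; if the buffer ever runs dry, the DAC blanks
 * the lasers rather than repeating anything. */
int dac_write_points(struct laser_dac *dac,
                     const struct laser_point *points,
                     size_t num_points);

/* Hypothetical helpers a caller might have: */
size_t dac_buffer_free(struct laser_dac *dac);                  /* room left in the buffer */
size_t render_next_points(struct laser_point *out, size_t max); /* the app's renderer */

/* The caller just tops the buffer up whenever there's room, without
 * caring where one "frame" of the animation ends. */
void fill_loop(struct laser_dac *dac)
{
    static struct laser_point scratch[1024];
    for (;;) {
        size_t space = dac_buffer_free(dac);
        if (space > sizeof scratch / sizeof scratch[0])
            space = sizeof scratch / sizeof scratch[0];
        size_t n = render_next_points(scratch, space);
        dac_write_points(dac, scratch, n);
    }
}
```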
(And, actually, procedural abstracts as in DigiSynth and LSX look so damn good because there is no point in time where one image turns into the next - the pattern evolves as it is being drawn.)
There's one difference between a "streaming" protocol and a "frame-based" protocol that's important, and it has to do with what happens if there's a problem - if connectivity to the DAC drops, or the software crashes or glitches, or some such thing goes wrong. A frame-based DAC thinks it knows what one "frame" is, so it has the option of repeating that frame until the glitch goes away. But this shouldn't be used, because (a) a frozen animation doesn't look any better than one that vanishes, and (b) it's unsafe.
Whether a DAC interface works in one of those two ways has nothing to do with "framing" in the protocol it uses underneath. My Ether Dream DAC protocol involves sending simple commands over TCP/IP, like "start playback", "stop playback", "here are 50 points to add to the buffer", etc. How those commands are divided up need not correspond at all to which points are part of which image in an animation, or to which bytes happen to be sent as part of which TCP/IP packet or Ethernet frame.
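For illustration only - this is not the actual Ether Dream wire format, and `send_data_command` is a made-up stand-in for "here are N points to add to the buffer" - the point is just that the chunks a client sends need not line up with image boundaries:

```c
/* Hypothetical helper: sends one data command containing `n` points. */
void send_data_command(int sock, const struct laser_point *pts, size_t n);

/* Carve the point stream into 50-point data commands, as in the example
 * above. The chunking is arbitrary and has nothing to do with where one
 * image in the animation ends and the next begins. */
void send_points(int sock, const struct laser_point *pts, size_t count)
{
    const size_t chunk = 50;
    for (size_t off = 0; off < count; off += chunk) {
        size_t n = (count - off < chunk) ? count - off : chunk;
        send_data_command(sock, pts + off, n);
    }
}
```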
Sound cards aren't built around the idea that you send them one beat at a time or one measure at a time. Laser DACs shouldn't be built around the analogous idea either.


