I haven't heard anything about the hardware. I and some others have been working on an Open DLL API interface type thing that should be released out into the wild here shortly (within this month).
bump
Any updates? It would be great to see this come to fruition!
I think there will be some significant things to report here in August. The Open Laser Show Controller API (DLL framework... whatever you want to call it) is up on my web site: http://www.fab-favreau.com/olsc.html
There will be an open source DAC out soon. If you are going to SELEM you will get to see it. Sorry about the lack of information. Details, pictures, and pudding coming this month.
Both 12-bit DACs and Ethernet are pretty easy to do via integrated SPI chips. The MCP4921 or MCP4922 is a very usable DAC for laser show applications: it has 12 bits of resolution, is addressable via SPI, and is fast enough for anything you might need.
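As a rough sketch of how simple the MCP4922 is to drive from an AVR's hardware SPI (assuming SPI is already initialised as master; the chip-select pin here is just an assumption, wire it to suit your board):

[code]
#include <avr/io.h>
#include <stdint.h>

/* Chip-select pin for the MCP4922 -- an assumption, change to match your wiring */
#define DAC_CS_PORT PORTB
#define DAC_CS_PIN  PB2

static void spi_send(uint8_t byte)
{
    SPDR = byte;                    /* start the transfer */
    while (!(SPSR & (1 << SPIF)))   /* wait until the byte has shifted out */
        ;
}

/* Write a 12-bit value to channel A (0) or B (1) of the MCP4922.
 * Command word: bit 15 = channel, bit 14 = buffered VREF, bit 13 = gain (1 = 1x),
 * bit 12 = /SHDN (1 = output active), bits 11..0 = DAC data. */
void mcp4922_write(uint8_t channel, uint16_t value)
{
    uint16_t cmd = (value & 0x0FFF)
                 | (1 << 12)                 /* output enabled     */
                 | (1 << 13)                 /* 1x gain            */
                 | ((uint16_t)(channel & 1) << 15);  /* DAC A or B */

    DAC_CS_PORT &= ~(1 << DAC_CS_PIN);       /* assert chip select */
    spi_send(cmd >> 8);
    spi_send(cmd & 0xFF);
    DAC_CS_PORT |= (1 << DAC_CS_PIN);        /* latch the new value */
}
[/code]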
Ethernet is also available via an SPI chip (the ENC28J60, for example), and so is playback from an SD card.
One thing I am wondering about is the software implementation: Why develop a new API from scratch (and expect show programs to support it) instead of implementing OpenLase, or the Easylase USB API for that matter?
As an alternative on the hardware front, you can look at RAMDACs used in VGA display cards (instead of trying to use the card as a whole). However, they may be more of a compromise on resolution, in favor of speed. As for USB, why use the FTDI chip and not program an AVR with libusb or V-USB directly?
There are many different ways to build one without spending too much money. You can even buy a complete Atmel development board that is more than capable of handling all of the requirements for a very nice DAC for around $50. The only missing piece is the true DAC portion, but that only amounts to a few chips. I'm interested in seeing what cfavreau has come up with. I'm sure it will be something cool based on what I have seen from him in the past.
As far as an API goes, I would propose something other than the double-frame-buffer approach that past DACs have used. I think that streaming is the way to go, or at least it should be offered as an alternative method of sending data. I have my reasons.
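To make the distinction concrete: by streaming I mean the host keeps topping up a point FIFO in the DAC while the point-clock interrupt drains it, instead of swapping whole frames. A rough sketch, with names and sizes that are purely illustrative:

[code]
#include <stdint.h>

#define FIFO_SIZE 4096          /* number of buffered points -- arbitrary */

typedef struct {
    int16_t x, y;               /* 12-bit galvo positions */
    uint8_t r, g, b;            /* colour channels        */
} point_t;

static volatile point_t fifo[FIFO_SIZE];
static volatile uint16_t head, tail;   /* head = producer, tail = consumer */

/* Called from the USB/Ethernet receive path. Returns 0 if the FIFO is full. */
int fifo_push(point_t p)
{
    uint16_t next = (head + 1) % FIFO_SIZE;
    if (next == tail)
        return 0;               /* host is sending faster than the scan rate */
    fifo[head] = p;
    head = next;
    return 1;
}

/* Called from the point-clock timer interrupt: output the next point. */
void fifo_pop_to_dac(void)
{
    if (tail == head)
        return;                 /* underrun: hold the last point or blank */
    point_t p = fifo[tail];
    tail = (tail + 1) % FIFO_SIZE;
    /* e.g. mcp4922_write(0, p.x); mcp4922_write(1, p.y); set colour outputs */
    (void)p;
}
[/code]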
If you want to support Ethernet in the future, it's a good idea to at least have some degree of framing in your protocol, since Ethernet is transmitted as discrete packets instead of a continuous stream.
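For example, a hypothetical framed packet could carry a magic number, a running sequence counter, and a point count, so the receiver can detect loss and resynchronise on the next frame boundary (field names and sizes here are illustrative only):

[code]
#include <stdint.h>

/* Hypothetical wire format for one packet of point data. */
typedef struct __attribute__((packed)) {
    uint32_t magic;        /* constant marker to find frame starts in the stream */
    uint16_t sequence;     /* increments per packet so gaps/loss are detectable  */
    uint16_t point_count;  /* number of fixed-size point records that follow     */
} frame_header_t;
[/code]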
I think it would be good to publish at least the API documentation and some of the 'under the hood' details somewhere, so everybody can extend the DAC design as it stands now. There are a lot of interesting options we can still implement.
One thing I would like to see is built-in effects like rotation and scaling controlled through DMX, which would be particularly useful for Ethernet DACs in multi-projector setups. You can run the same ILDA stream to a set of projectors and have each one project a different effect by scaling, mirroring or rotating its image individually.
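The DMX side of that could be as simple as a handful of consecutive channels per projector mapped onto transform parameters; the channel layout below is purely hypothetical:

[code]
#include <stdint.h>

/* Hypothetical DMX personality: five consecutive channels per projector. */
typedef struct {
    uint8_t rotation;   /* 0..255 -> 0..360 degrees          */
    uint8_t scale_x;    /* 0..255 -> 0..100% horizontal size  */
    uint8_t scale_y;    /* 0..255 -> 0..100% vertical size    */
    uint8_t mirror;     /* bit 0 = flip X, bit 1 = flip Y     */
    uint8_t intensity;  /* master dimmer for this projector   */
} projector_fx_t;

/* Pull this projector's block out of a received 512-byte DMX universe.
 * Caller must ensure start_address + 4 is within the universe. */
projector_fx_t fx_from_dmx(const uint8_t *universe, uint16_t start_address)
{
    projector_fx_t fx;
    fx.rotation  = universe[start_address + 0];
    fx.scale_x   = universe[start_address + 1];
    fx.scale_y   = universe[start_address + 2];
    fx.mirror    = universe[start_address + 3];
    fx.intensity = universe[start_address + 4];
    return fx;
}
[/code]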
I was aware of that; my point is that it would make sense to match the transmission frame and display frame formats. That way, you can at least handle some degree of error in case the data stream is interrupted.
Also, it opens up some interesting opportunities if you implement transformations, as you can upload less frame data (best case, a single frame) and do the rest of the work in the projector or DAC using transformations and other procedural elements on the fly. That would also make for more dynamic and interesting shows while requiring less infrastructure.
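In other words, the DAC could hold one frame and re-render it every scan with the current transform. Something along these lines would do it (floating point for clarity; a real microcontroller implementation would likely use fixed point):

[code]
#include <math.h>

typedef struct { float x, y; } pt_t;

/* Scale, mirror and rotate a single point about the centre of the image.
 * The parameters would come from DMX or from the control protocol. */
pt_t transform_point(pt_t p, float scale_x, float scale_y,
                     float angle_rad, int mirror_x, int mirror_y)
{
    float x = p.x * scale_x * (mirror_x ? -1.0f : 1.0f);
    float y = p.y * scale_y * (mirror_y ? -1.0f : 1.0f);

    pt_t out;
    out.x = x * cosf(angle_rad) - y * sinf(angle_rad);
    out.y = x * sinf(angle_rad) + y * cosf(angle_rad);
    return out;
}
[/code]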
I don't agree with either of those, actually. But, I don't have time to go into it now. Time for work.
I don't follow your line of thinking. The standard MTU is 1500 bytes including IP/TCP headers. So unless your display frame is smaller than that, or you send jumbo frames (which you need to ensure the layer-2 switching hardware supports), I don't think it will work.
I don't know why it's important anyway - you typically just write/read the data to/from an open socket - the TCP/IP stack handles errors in transmission, stuffing of payloads into packets, framing, etc. Why would you want to complicate things?
If anything, build some form of error handling into the application to ensure the payloads aren't being corrupted - but even that would be overkill, as TCP/IP is doing all of that for you already....
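To illustrate the "just write to the socket" point: on the host side, sending a frame is little more than a loop around send(), with TCP handling segmentation and retransmission. A minimal sketch, assuming POSIX sockets and an already-connected socket:

[code]
#include <sys/types.h>
#include <sys/socket.h>
#include <stddef.h>
#include <stdint.h>

/* Write an entire buffer to a connected TCP socket, handling short writes.
 * Returns 0 on success, -1 on error. */
int send_all(int sock, const uint8_t *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sock, buf + sent, len - sent, 0);
        if (n <= 0)
            return -1;          /* connection dropped or other socket error */
        sent += (size_t)n;
    }
    return 0;
}
[/code]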