It’s far better to discuss the underlying technologies that make it all work: machine learning, deep learning, natural language processing, knowledge graphs, and especially computer vision.
That last one means a lot to us, because Raspberry Pi has released its AI Camera, and I for one can’t wait to start putting it to serious use. (The fantastic AI Kit, meanwhile, is now an all-in-one AI HAT with a Hailo accelerator running at up to 26 TOPS, enabling a wider range of non-vision ML projects, and it’s looking very handsome at this point.)
A lot is going on with this equipment, and things are starting to get serious: Sony has been using the AI Camera on its own production line. Take a look at Sony’s AITRIOS developer site for more information on how the AI Camera is genuinely useful from an industrial point of view. We’re hoping to get some great tutorials lined up with them.
While the usefulness of AI is like Schrödinger’s cat, both existing and not existing at the same time, I think it’s important to remember that the underlying technologies have real and serious purposes in industry, education, and healthcare. It’s not all trying to get ChatGPT to figure out how many times the letter ‘r’ appears in ‘strawberry’.
I recently updated my iPhone to use Apple Intelligence (there are not enough rolling eyes in the world), and it’s producing some handy overviews of email conversations. While this is all well and good, it is Raspberry Pi that’s putting AI in an industrial setting: you can hook an AI Camera up to a Raspberry Pi and get it to do real-world things.
My personal involvement with testing and writing about AI started out with Google AIY (which went on to become the Coral toolkit). This add-on gave Raspberry Pi a TensorFlow Lite machine learning accelerator, and the USB Accelerator’s Google Edge TPU coprocessor provides 4 TOPS (tera-operations per second). We’ve built teachable machines; now I’m delighted to have an AI Camera and an AI HAT, and I’m very open to ideas. Let’s build something wild.