At a developer conference I attended not long ago, attendees did little to hide their disdain every time the term “AI” was bandied about. (And it was bandied about a lot!) So I was careful on a recent call attended by about 250 engineers to preface the AI portion of the discussion with, “I know this will make you cringe, but…” That got a lot of knowing laughs and thumbs-up emojis.
What’s going on here? It’s not that these developers and engineers are against the use of AI; it’s that they are tired of hearing about artificial intelligence as a panacea without pragmatism. They want to hear about how they can pragmatically and easily harness it — now — for real-life use cases.
Indeed, we’ve spent the last few years being bombarded by hyperbolic talk about AI (Robotaxis, anyone?). How it’s going to transform life as we know it. How it’s going to take our jobs. When it will become sentient…
Meanwhile, AI has kind of quietly become part of the fabric of our lives — not by changing our lives or taking our jobs or becoming sentient, but by making our lives and our jobs easier. For example, when I Googled “When will AI become sentient?” (and “When did Skynet become self-aware?” for comparison purposes), I didn’t have to comb through results one at a time but instead read the AI-generated summary of the most relevant content at the top, with sources. (Spoiler alert: Opinions are mixed.)
There are hundreds of other examples of AI applications that are, well, pretty boring but really useful. What’s a lot less boring right now is scaling and integrating AI across the organization. And that’s where the AI backlash can be leveraged.
Making AI usefully boring
Developers, engineers, operations personnel, enterprise architects, IT managers, and others need AI to be as boring for them as it has become for consumers. They need it not to be a “thing,” but rather something that is managed and integrated seamlessly into — and supported by — the infrastructure stack and the tools they use to do their jobs. They don’t want to hear endlessly about AI; they just want AI to work seamlessly for them so that it just works for their customers.
Organizations can support that by using tools that are open, transparent, easy to use, compatible with existing systems, and scalable. In other words, boring.
The open source RamaLama project’s stated goal, for example, is to make AI boring through the use of OCI containers. RamaLama makes it easy to discover, test, learn about, and serve generative AI models locally — in containers. It first inspects your system for GPU support, falling back to CPU support if no GPUs are present. It then uses either Podman or Docker (or runs on the local system if neither is present) to pull an OCI image containing all the software needed to run an AI model on your system’s setup. This eliminates the need for users to perform complex AI configuration on their systems.
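To show how boring that workflow is in practice, here is a minimal sketch that shells out to the RamaLama CLI from Python. The `pull` and `serve` subcommands reflect the project’s documented interface, but treat the model reference and the defaults (such as the serving port) as assumptions to verify against the current docs.

```python
import subprocess

# Assumptions: the RamaLama CLI is installed and on PATH, and the model
# reference below resolves; substitute any model you want to try.
MODEL = "ollama://tinyllama"  # hypothetical model reference

# Pull the model. RamaLama detects GPU support and selects a matching
# container image (falling back to CPU) with no manual configuration.
subprocess.run(["ramalama", "pull", MODEL], check=True)

# Serve the model locally in a container; this blocks until interrupted.
subprocess.run(["ramalama", "serve", MODEL], check=True)
```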
The Ollama project similarly helps users get up and running with AI models locally, but it doesn’t help you run those models in production. RamaLama goes a step further by helping you push a model into a container image and then push that container image out to a registry. Once you have a container image, you can ship it off, fine-tune it, and bring it back. It gives you the portability of containers for model development.
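In the same spirit, the registry step might look like the sketch below. The `convert` and `push` subcommands follow the RamaLama documentation as I understand it; the model reference and registry path are hypothetical placeholders, and you would need to be authenticated to the target registry first.

```python
import subprocess

# Assumptions: RamaLama is installed, and you are logged in to the
# target registry; both references below are hypothetical placeholders.
SOURCE = "ollama://tinyllama"                      # model to package
TARGET = "oci://quay.io/example/tinyllama:latest"  # your registry path

# Package the model as an OCI image...
subprocess.run(["ramalama", "convert", SOURCE, TARGET], check=True)

# ...then push it, so ordinary container tooling (CI/CD, Kubernetes)
# can pull the model like any other image.
subprocess.run(["ramalama", "push", TARGET], check=True)
```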
(My colleagues Dan Walsh and Eric Curtin posted a great video on YouTube that puts RamaLama in perspective, along with a demo.)
RamaLama isn’t the only project or product that can support AI humdrumness, but it’s a great example of the kinds of things to look for when adopting AI systems across the organization.
Right-sizing the models
The models themselves are also, rightly, growing more mainstream. A year ago they were anything but, with talk of potentially gazillions of parameters and fears about the legal, privacy, financial, and even environmental challenges such a data abyss would create.
Those LLLMs (literally large language models) are still out there, and still growing, but many organizations are looking for their models to be far less extreme. They don’t need (or want) a model that includes everything anyone ever learned about anything; rather, they need models that are fine-tuned with data that is relevant to the business, that don’t necessarily require state-of-the-art GPUs, and that promote transparency and trust. As Matt Hicks, CEO of Red Hat, put it, “Small models unlock adoption.”
Similarly, organizations are looking for ways to move AI from the rarefied air of data science to a place where stakeholders across the organization can understand and make use of it as part of their day-to-day work. For developers, this kind of democratization requires tools that enable safe spaces for experimentation with building, testing, and running intelligent applications.
Here’s a provocative premise: LLMs and models are just software. They’re just files and processes, processes that run on CPUs and GPUs.
It just so happens that we have a technology that can help with files and processes — Linux containers. Linux is the default platform for AI development, so it makes sense to use Linux containers, which give developers a safe place to experiment without necessarily putting their data in the cloud. Containers also give developers an easy way to move their applications from those safe spaces to production, without having to worry about infrastructure.
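To make the files-and-processes point concrete: once a model is served from a local container (for example, via `ramalama serve`), an application talks to it like any other local process. The sketch below assumes the local server exposes an OpenAI-compatible chat endpoint on port 8080, which is typical of llama.cpp-based servers, but the port, path, and model name are assumptions to check against your setup.

```python
import json
import urllib.request

# Assumption: a model server (e.g., started with `ramalama serve`) is
# listening locally with an OpenAI-compatible chat endpoint.
URL = "http://localhost:8080/v1/chat/completions"  # port/path assumed

payload = {
    "model": "tinyllama",  # many local servers ignore this field
    "messages": [
        {"role": "user", "content": "Summarize what an OCI image is."}
    ],
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# No data leaves the machine: the "cloud API" here is a local container.
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```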
A home in containers
Organizations already have container infrastructure (registry servers, CI/CD pipelines, and production platforms like Kubernetes) with the capabilities that predictive and generative AI workloads need: scalability, security, and Linux optimizations. Reusing that infrastructure lets teams put AI to work while retaining flexibility and control over their data across diverse environments.
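As a rough sketch of what that reuse looks like, the following uses the Kubernetes Python client to deploy a model image through the same path as any other workload. The image name is a hypothetical placeholder for a self-contained model server (model plus serving runtime); whether an image produced by a tool like RamaLama is directly runnable this way, or needs a serving runtime layered in, is a detail to verify.

```python
from kubernetes import client, config

# Assumptions: a reachable cluster via your local kubeconfig, and a
# hypothetical, self-contained model-server image already in a registry.
IMAGE = "quay.io/example/tinyllama-server:latest"  # placeholder image

config.load_kube_config()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="model-server"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "model-server"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "model-server"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="model",
                        image=IMAGE,
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

# The model ships through the same deployment machinery as any other app.
client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)
```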
The AI wave is no different from the waves that surrounded other transformative technologies. (And, yes, I said it: it is transformative.) Think about how we used to say “web-based” or “cloud-based” before everything just became web-based or cloud-based and we didn’t need the modifier anymore. But that happened only after similar backlash and efforts to make web- and cloud-based technology more usable. The same will happen (and is happening) with AI.