As organizations and companies get comfortable with artificial intelligence, they show a strong bias toward commercial large-scale models, where existing service agreements and enterprise controls let them deploy the technology securely. The challenge is that this approach only works for always-connected solutions, and it carries deeper disadvantages: loss of autonomy, constrained creativity and innovation, and model drift, to name a few of the risks that arise in products that depend on a third party to provide their artificial intelligence.
This challenge is sharper with LLMs than with a common utility such as power or water. An LLM carries its own risks and changes over time, and those changes flow into your product, pushing consistency and reliability to the forefront of risk if you are trying to deliver a dependable solution to your customers.
The Counterargument: Localized LLMs
The counterargument to the OpenAI-Azure relationship, or to GCP and the cloud generally, is for organizations and individuals to adopt localized LLMs. This begins with individuals who stand up local vector databases for retrieval-augmented generation (RAG), establishing personal or contextual databases of an organization's functions and intellectual property that then hook into larger-scale models (a minimal sketch follows the list below). This, if you will allow the analogy, is the gateway drug to localized LLMs for business functions such as:
- Legal ops
- Digital ops
- Marketing ops
- Product security ops
These constructs, which have emerged over the last few years, bring technology augmentation into classically fixed human functions.
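As a concrete illustration of that gateway pattern, here is a minimal sketch of local retrieval: the documents and their embeddings never leave the machine, and only the retrieved context would be handed to a model. The sample documents, the embedding model name, and the helper function are illustrative assumptions built on the sentence-transformers library, not a prescribed stack.

```python
# Minimal local RAG sketch: embed in-house documents on the machine,
# retrieve the most relevant ones, and pass only that context to an LLM.
# Assumes the sentence-transformers package and a cached embedding model.
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical in-house documents, e.g. legal-ops or product-security playbooks.
documents = [
    "Contract renewals must be reviewed by legal ops 90 days in advance.",
    "All customer telemetry is anonymized before leaving the device.",
    "Marketing ops owns the approval workflow for external publications.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, runs locally
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # normalized vectors, so dot product == cosine
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved context can be prepended to a prompt for a local or hosted LLM.
context = "\n".join(retrieve("When does legal review contracts?"))
print(context)
```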
Beyond Basic Use: The Critical Need for Independence
As we look beyond this basic use, we recognize that our technology, our offerings, our products, our customers, our security, our national security, and simply life safety cannot depend on an always-connected link to these super-scale cloud model providers.
In fact, the most secure, reliable, life-safety-oriented solution is one where you build this artificial intelligence capability locally, using open-source models trained and tuned to the task at hand. This lets us build fit-for-purpose technology solutions and products that operate with high reliability, high security, and high confidence where it matters, whether in your pocket, in your vehicle, in your airplane, in your building, or in the systems helping make the food you eat.
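As one hedged sketch of what "local and tuned to the task" can look like in practice, the snippet below loads a small open-source model through the Hugging Face transformers library. The model name and prompt are illustrative assumptions, and once the weights are downloaded and cached, the generation step itself needs no network connection.

```python
# Minimal local inference sketch: an open-source model served from the
# local cache, with no cloud dependency at answer time.
from transformers import pipeline

# TinyLlama is used purely as an illustrative small open model; any locally
# stored checkpoint fine-tuned to your domain can take its place.
generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

output = generator(
    "List three safety checks before starting the packaging line:",
    max_new_tokens=64,
    do_sample=False,  # greedy decoding for repeatable, product-grade behavior
)
print(output[0]["generated_text"])
```

Deterministic decoding is used here deliberately: a fixed local model with greedy decoding gives the consistent behavior that a remotely updated cloud model cannot guarantee.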
These advantages are not possible in an always-connected Azure or Google Cloud deployment. Therefore, we as professionals, engineers, creators, and customers need to find and embrace solutions that develop and deploy localized LLMs, built on open-source models that are tested and known for reliability.
The Security Advantage of Open Source
Open-source models can be better than any proprietary flagship offered by Google, OpenAI, or others. This is possible because, much like cryptographic algorithms, the approaches that are open source, tested by experts, and vetted in public have proven the most secure; Kerckhoffs's principle in cryptography rests on the same insight. Those that are proprietary, closed-source, and limited in visibility have historically proven to be insecure, breakable, and unreliable.
Open-source models, over which we have more domain control and which carry a smaller footprint, are viable alternatives: fit-for-purpose tools and techniques we can put in place. They can hold our corporate crown jewels and our IP securely, free from dispersion, and they can run inside the products we sell, operating reliably and meaningfully for our customers. This is the greatest potential for growth, velocity of adoption, and benefit to humanity.
The Path Forward: Building Local AI for Products and Business
Cloud providers like OpenAI, Google, and Microsoft Azure serve important purposes and excel in many applications. They offer valuable resources, computational power, and rapid deployment capabilities that organizations should continue to leverage where appropriate. However, they cannot be our sole pathway to AI implementation.
The evidence throughout this analysis demonstrates that relying exclusively on cloud-based AI creates fundamental vulnerabilities: loss of autonomy and creativity, always-connected dependencies that fail when connectivity is lost, inconsistent performance for customer-facing products, and an inability to operate in critical environments where reliability matters most, from vehicles to life safety systems.
The solution is building complementary local AI capabilities alongside cloud services. Organizations that invest in localized, open-source models trained for their specific needs gain the high-reliability, high-security, high-confidence operation that cloud dependencies cannot provide. They protect their intellectual property, maintain consistency for customers, and ensure their products work reliably, whether in a pocket, an airplane, or a remote facility.
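One hedged sketch of that complementary pattern: route requests to the cloud model where it excels, and degrade gracefully to a local model when connectivity fails. Both completion functions below are hypothetical stand-ins, not real SDK calls; in practice, the cloud function would wrap your provider's client and the local one would wrap an on-device open-source model.

```python
# Cloud-first with local fallback: the product stays functional offline.
def cloud_complete(prompt: str) -> str:
    # Stand-in for a real provider SDK call; simulated as unreachable here.
    raise ConnectionError("no network")

def local_complete(prompt: str) -> str:
    # Stand-in for on-device inference with a local open-source model.
    return "local answer for: " + prompt

def complete(prompt: str) -> str:
    """Prefer the cloud where it excels; stay operational when offline."""
    try:
        return cloud_complete(prompt)
    except (ConnectionError, TimeoutError):
        return local_complete(prompt)

print(complete("Check door sensor status"))
```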
This represents the greatest potential for growth, velocity of adoption, and benefit to humanity: not through dependence on external providers alone, but through the strategic combination of cloud resources where they excel and local AI where independence, security, and reliability are paramount. The tools exist, the technology is mature, and the choice is clear: build local AI capabilities to complement, not replace, the cloud ecosystem.
- Research and ideas from James DeLuccia, who works deeply and practically with the largest companies in the world building the most important technology and products. AI, Gen-AI, agents, and the innovations come from hands-on work. Go play. Go create. Do good work.