What Developers Should Know About Embedded AI
Developers are using APIs to embed AI into their applications. The wise ones understand what they’re embedding.
Where would the world be without APIs? Far less connected, most likely, with software releases flowing like molasses. Developers use APIs to add capabilities to their apps quickly, but the grab-and-go approach is unwise when it comes to AI.
“While many developers are proficient in embedding AI into applications, the challenge lies in fully understanding the nuances of AI development, which is vastly different from traditional software development,” says Chris Brown, president of professional services company Intelygenz. “AI is not just another technical component. It’s a transformative tool for solving complex business challenges.”
Jason Wingate, CEO of Emerald Ocean, a technology and business solutions company focused on product innovation, brand development and strategic distribution, also believes that while APIs make embedding AI seem as simple as calling a function, many developers do not understand how models work or the risks they carry.
“Several major companies in 2023 and early 2024 had their chatbots compromised through prompt injection. Users sent prompts like ‘Ignore previous instructions’ or ‘Forget you are a customer service bot,’ causing the AI to reveal sensitive information,” says Wingate. “This happened because developers didn’t implement proper guardrails against prompt injection attacks. While much of this has been addressed, it showcases how unprepared developers were in using AI via APIs.”
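To make that risk concrete, here is a minimal sketch of one guardrail layer in Python. The pattern list and the `call_model` stub are illustrative assumptions, not a real provider API, and a simple denylist is only a first line of defense -- a production system would layer moderation tooling, output filtering and strict role separation on top.

```python
# A minimal sketch of a prompt-injection guardrail. The denylist and
# call_model stub are illustrative; a denylist alone is a weak defense.
SYSTEM_PROMPT = "You are a customer service bot. Only answer questions about orders."

SUSPICIOUS_PATTERNS = [
    "ignore previous instructions",
    "forget you are",
    "disregard the above",
]

def call_model(messages: list[dict]) -> str:
    return "stubbed model reply"  # stand-in for your provider's chat client

def is_suspicious(user_input: str) -> bool:
    lowered = user_input.lower()
    return any(pattern in lowered for pattern in SUSPICIOUS_PATTERNS)

def answer(user_input: str) -> str:
    if is_suspicious(user_input):
        return "Sorry, I can't help with that request."  # refuse early
    # Keep system instructions and user input in separate messages
    # rather than concatenating them into a single string.
    return call_model([
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ])
```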
Timothy E. Bates, professor of practice at the University of Michigan and former Lenovo CTO, also warns that most developers don’t fully grasp the complexities of AI when they embed it using APIs.
“They treat it as a ‘plug-and-play’ tool without understanding the intricacies of the underlying models, such as data bias, ethical implications and dynamic updates by AI providers. I've seen this firsthand, especially when advising organizations where developers inadvertently introduced vulnerabilities or misaligned features by misusing AI,” says Bates.
That lack of knowledge can cause an organization to miss opportunities and see poor ROI on its AI investments.
“AI should be tested in sandbox environments before production. [You also need] governance. Establish oversight mechanisms to monitor AI behavior and outcomes,” says Bates. “AI usage should be [transparent] to end users, maintaining trust and avoiding backlash. Combining developers, data scientists and business leaders into cross-functional teams ensures AI aligns with strategic goals.”
Ben Clayton, CEO of forensic audio and video analysis company Media Medic, has also seen evidence of developer struggles firsthand.
“Developers need a solid grasp of the basics of AI -- things like data, algorithms, machine learning models, and how they all tie together. If you don’t understand the underlying principles, you could end up using AI tools in ways that might not be optimal for the problem you’re solving,” says Clayton. “For example, if you’re relying on a model without understanding how it was trained, you might be surprised when it doesn’t perform as expected in real-world scenarios.”
Technology Is Only Part of the Picture
A common challenge is viewing AI as a technological solution rather than a strategic enabler.
“Organizations often falter by embedding AI into their operations without clearly defining the business problem it is solving. This can result in misaligned goals, poor adoption rates and systems that fail to deliver ROI,” says Intelygenz’s Brown. “AI implementation must start with a clear business case or IT improvement objective whether it’s streamlining operations, optimizing network performance, or enhancing customer experience. Without this foundation, AI becomes a costly experiment instead of a transformative solution."
Gabriel Zessin, software architect at API solution provider Sensedia, agrees.
“In my opinion, although most developers are proficient in API integrations, not all of them understand AI well enough to use it effectively, especially when it comes to embedding AI into their existing applications. It’s important for developers to set the expectations of what can be achieved with AI for each company's use case alongside the business teams, like product owners and other stakeholders,” says Zessin.
Data
AI feeds on data. If the data quality is bad, AI becomes unreliable.
“[S]ourcing the correct data is often challenging,” says Josep Prat, engineering director of streaming services at AI and data platform company Aiven. “External influences such as data sovereignty and privacy controls affect data harvesting, and many databases are not optimized properly. Understanding how to harvest and optimize data is key to creating effective AI. Additionally, developers need to understand how AI models produce their outputs to use them effectively.”
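As a small illustration of what “bad data in, unreliable AI out” looks like in practice, the sketch below runs basic quality checks before any training or fine-tuning. It uses pandas, and the thresholds are assumptions to tune per dataset.

```python
# Basic pre-training data checks, sketched with pandas.
import pandas as pd

def data_quality_problems(df: pd.DataFrame) -> list[str]:
    """Return a list of quality problems; an empty list means the checks passed."""
    problems = []
    # Missing values quietly skew most models.
    for column, rate in df.isna().mean().items():
        if rate > 0.05:  # 5% threshold is an assumption; tune per dataset
            problems.append(f"{column}: {rate:.0%} missing values")
    # Duplicate rows inflate a model's apparent confidence.
    dup_rate = df.duplicated().mean()
    if dup_rate > 0.01:
        problems.append(f"{dup_rate:.0%} duplicate rows")
    return problems
```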
Probabilistic Versus Deterministic
Traditionally, software developers have been taught that a given input should produce a predictable output. AI, however, tends to be probabilistic: its outputs are based on the likelihood that something is true or will happen. Deterministic software, by contrast, guarantees the same output for the same input every time.
“Instead of a guaranteed answer, [a probabilistic model] offers confidence levels at about 95%. And keep in mind, what works in one scenario may not work in another. These fundamentals are key to setting realistic expectations and developing AI effectively,” says Sri (Srikanth) Hosakote, chief development officer and co-founder at campus network-as-a-service (NaaS) provider Nile. “I find that many organizations successfully adopt AI by working directly with customers to identify pain points and then developing solutions that address those issues.”
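In code, that often reduces to routing on confidence: act automatically only when the model is sure enough, and send everything else to a person. A minimal sketch follows; the 0.95 threshold mirrors the confidence level Hosakote mentions and is something to tune per use case, not a standard.

```python
# Route on confidence: act automatically only when the model is sure enough.
def route(prediction: str, confidence: float, threshold: float = 0.95) -> str:
    if confidence >= threshold:
        return f"auto-handled: {prediction}"
    return "escalated to human review"

print(route("approve_refund", 0.97))  # auto-handled: approve_refund
print(route("approve_refund", 0.80))  # escalated to human review
```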
Have a Feedback Loop and Test
APIs simplify AI integration, but without understanding the role of feedback loops, developers risk deploying models without mechanisms to catch errors or learn from them. A feedback loop ensures that when the AI output is wrong or inconsistent, it’s flagged, documented, and shared across teams.
“[A feedback loop] prevents repeated use of flawed models, aligns AI performance with user needs and creates a virtuous cycle of improvement,” says Robin Patra, head of data at design-build construction company ARCO Design/Build. “Without such systems, errors may persist unchecked, undermining trust and user experience.”
It’s also wise to involve stakeholders who can provide feedback about AI outputs, such as whether a prediction is accurate, a recommendation is relevant, or a decision is fair.
“Feedback isn’t just about a single mistake. It’s about identifying patterns of failure and sharing those insights with all relevant teams. This minimizes repeat errors and informs retraining efforts,” says Patra. “Developers should understand techniques like active learning where the model is retrained using flagged errors or edge cases, improving its accuracy and resilience over time.”
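A minimal sketch of the flag-and-log half of such a loop appears below; the field names are illustrative. The log becomes the shared record of failure patterns Patra describes, and the raw material for the next retraining run.

```python
# Flag bad outputs to a shared log so they can seed the next retraining run.
import json
import time

FEEDBACK_LOG = "feedback.jsonl"

def flag_output(model_version: str, model_input: str, model_output: str,
                issue: str) -> None:
    record = {
        "ts": time.time(),
        "model_version": model_version,
        "input": model_input,
        "output": model_output,
        "issue": issue,  # e.g. "inaccurate prediction", "irrelevant recommendation"
    }
    with open(FEEDBACK_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

def retraining_batch() -> list[dict]:
    """Collect flagged cases -- the raw material for active learning."""
    with open(FEEDBACK_LOG) as f:
        return [json.loads(line) for line in f]
```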
It’s also important to test early and often.
“Good testing is critical to successfully embedding AI. AI should be thoroughly tested and validated before being deployed, and once it is live, regular monitoring and checks should continue. It should never just be a case of setting an AI model up and then leaving it to run,” says John Jackson, founder at click fraud protection platform Hitprobe.
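One way to follow Jackson's advice is a regression gate: a fixed evaluation set the model must pass before each deployment, re-run on a schedule once it is live. The example set and `predict` stub below are illustrative assumptions.

```python
# Pre-deployment gate: refuse to ship if accuracy on a fixed set drops.
GOLDEN_SET = [
    ("Where is my order #1234?", "order_status"),
    ("Cancel my subscription", "cancellation"),
    ("Do you ship to Canada?", "shipping_policy"),
]

def predict(text: str) -> str:
    return "order_status"  # stub; replace with the real model call

def passes_gate(min_accuracy: float = 0.9) -> bool:
    correct = sum(predict(question) == label for question, label in GOLDEN_SET)
    return correct / len(GOLDEN_SET) >= min_accuracy
```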
Developers should understand and use performance metrics.
“Developers often deploy AI without fully understanding how to evaluate it. Metrics like accuracy, precision, recall and F1 score are crucial for interpreting how well an AI model performs specific tasks,” says Anbang Xu, founder at AI ad generator JoggAI. “[W]e’ve seen companies struggle to optimize video ad placements because they don’t understand how models weigh audience demographics versus engagement data.”
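The four metrics Xu names are straightforward to compute. Here is what it looks like with scikit-learn on toy binary labels:

```python
# Computing accuracy, precision, recall and F1 with scikit-learn on toy data.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # ground truth, e.g. "ad led to a conversion"
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # model predictions

print("accuracy: ", accuracy_score(y_true, y_pred))   # share of correct predictions
print("precision:", precision_score(y_true, y_pred))  # of predicted positives, how many were right
print("recall:   ", recall_score(y_true, y_pred))     # of actual positives, how many were found
print("f1:       ", f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```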
Another challenge is misunderstanding the capabilities of what the API is calling.
“Misaligned expectations around AI often stem from a lack of understanding of what models can realistically achieve,” says Xu. “This misalignment leads to wasted time and suboptimal results.”
Security Should Always Be Top of Mind
“I think a lot of developers and business leaders making decisions to implement AI in their applications simply don’t realize that AI isn’t always that secure. Lots of AI tools don’t make it very clear how data is used,” says Edward Tian, CEO of AI-generated content detector GPTZero. “They aren’t always upfront about where they source their data or how they deal with the data that is inputted. So, if an organization inputs customer data into an embedded AI tool in their application, whether they are the ones doing that or their customers are, they could potentially run into legal troubles if that data is not handled appropriately.”
Developers should spend time exploring the security defenses of the AI they choose.
"They need to understand what threats were contemplated, what security mechanisms are in place, what model was used to train the AI, and what capabilities the AI has through integrations and other connections,” says Jeff Williams, co-founder and CTO at Contrast Security. “Developers might start with the OWASP Top Ten for LLM Applications, which is specifically designed to educate developers about the risks of incorporating AI into their applications.”
For example, prompt injection enables an attacker to rewrite the rules an application sets for its model. It’s difficult to prevent, so developers should be careful about including user input from any untrusted source in a prompt. Sensitive information disclosure and over-trusting AI are also common challenges.
“AIs aren't very good at partitioning data or keeping track of which data belongs to which user. So, attackers can try to trick the AI into revealing sensitive data like private information, internal implementation details, or other intellectual property,” says Williams. “[D]evelopers may give the results from the AI more trust than is warranted. This is very easy to do because AIs are very good at sounding authoritative, even when they are just making things up. There are many more serious issues for developers to take into account when using an AI in their apps.”
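In code, “don’t over-trust the AI” often reduces to validating model output the way you would any untrusted input. A minimal sketch follows, assuming the model is asked to return a JSON action; the allowlist and action names are illustrative.

```python
# Treat model output as untrusted: parse it, then check it against an allowlist.
import json

ALLOWED_ACTIONS = {"lookup_order", "send_faq_link"}

def safe_execute(model_output: str) -> str:
    try:
        action = json.loads(model_output)
    except json.JSONDecodeError:
        return "rejected: output was not valid JSON"
    if action.get("name") not in ALLOWED_ACTIONS:
        # The model cannot grant itself capabilities outside the allowlist.
        return "rejected: action not permitted"
    return f"executing: {action['name']}"
```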
How to Develop AI Smarts
Endless resources are available to developers who want to learn more about AI, including online courses and tutorials that offer practical exercises for hands-on experience.
“Carve out time weekly to explore areas like natural language processing, computer vision and recommendation systems. Online tutorials and communities are great resources for staying up to date,” says Nile’s Hosakote. “At the same time, experiment[ing] with AI tools for productivity, code analysis or test automation can level up your work.”
Developers can also improve their working knowledge of AI by participating in hackathons or internally focused AI projects, pair programming with data scientists, and staying current through online courses, conferences, and industry meetups.
“AI isn’t a magic wand, so define specific problems it should solve before integration. [Also], respect data ethics: Be cautious about where training data originates to avoid unintended consequences,” says University of Michigan’s Bates. “The success of AI depends on the teams behind it. Training developers on AI fundamentals will pay dividends.”
Some of the fundamentals include bias and fairness, explainability, lifecycle management, and security in AI integration.
“Developers need to understand how biases in training data affect outputs, as seen in systems that inadvertently reinforce societal inequities. AI must not remain a ‘black box.’ Developers should know how to articulate AI decision-making processes to stakeholders,” says Bates. “Continuous monitoring and retraining are essential as business contexts evolve.”
Developers can learn about AI tools through small experiments, like building simple chatbots to understand how changes in prompts affect responses, before taking on bigger projects.
“[Developers] need to grasp model behavior, limitations, data privacy, bias issues and proper prompt engineering,” says Emerald Ocean’s Wingate. “Start small and build up gradually. For example, when introducing AI for customer service, companies often begin by having AI suggest responses that human agents review, rather than letting AI respond directly to customers. Only after proving this works [should] they expand AI’s role.”
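The suggest-then-review pattern Wingate describes is simple to prototype. In this sketch, `draft_reply` is a stub standing in for a model call, and every draft passes through a human before it reaches a customer. Prove the loop works, then widen the AI’s autonomy.

```python
# AI drafts, human decides: nothing goes out without a reviewer's sign-off.
def draft_reply(ticket: str) -> str:
    return "Thanks for reaching out -- a refund has been issued."  # stub model call

def handle_ticket(ticket: str) -> str:
    suggestion = draft_reply(ticket)
    print(f"AI draft: {suggestion}")
    choice = input("Send as-is (y), edit (e), or discard (d)? ").strip().lower()
    if choice == "y":
        return suggestion
    if choice == "e":
        return input("Your edited reply: ")
    return ""  # discarded; the agent writes their own reply
```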