95 percent of all AI projects in companies still do not deliver what is expected of them. AI systems require onboarding like new employees and often fail for lack of context. But behind the scenes, the next generation is already emerging: adaptive data architectures and intelligent knowledge graphs that could finally make AI scalable. A look beyond the hype, and an outlook on what really works.
(Image: AI-generated)
After four years of AI hype, many electronics developers are asking a central question: Where is the ROI in edge AI projects? AI chips are expensive, inference runs are unstable, and classic embedded architectures are reaching their limits. The graph experts from Neo4j highlight which trends in 2026 could finally make AI productive in embedded systems.
AI Reality Check: Scaling in Focus
The feedback from companies speaks for itself: Most AI projects are not yet delivering what was expected of them. According to an MIT study, 95% of pilot projects yield no measurable results. Gartner predicts that 40% of agentic AI projects will fail by 2027, hindered by costs, unclear ROI, and unresolved risks. This highlights the GenAI paradox: general AI tools and assistants can be deployed quickly, but their ROI is hard to measure. Truly value-creating, vertically integrated AI systems, in contrast, are making their way into companies only with great difficulty.
Still, talk of AI frustration is premature. The end-user hype around ChatGPT, Copilot & Co., and not least the enormous sums invested by tech giants, have simply raised expectations that collide with reality in companies. Here, AI systems must be integrated deeply and securely into existing processes, data structures, and IT landscapes. That integration takes time and adjustments, and even AI can accelerate it only to a limited extent. Moreover, AI work is experimental: many prototype projects must fail in order to reveal which approaches work in the long term. The actual scaling is yet to begin.
AI Agents: The New Trainees
AI agents highlight this paradox particularly well. In practice, there is little evidence of autonomous "agent armies" replacing entire departments. Most systems operate hidden in the background, primarily handling time-consuming research tasks, for example in law, compliance, or medicine. While, according to McKinsey, the majority of companies are experimenting with AI agents, only 23% actually deploy them in a productive area, and in no function does the share of scaled agents currently exceed around 10%. This shows that their utility is highly context-dependent and their productive use narrowly limited. Companies must first tune out the AI hype and soberly identify where agents can create genuine impact.
This iterative approach is justified because AI remains unreliable. This is usually not due to a lack of intelligence in the models. Rather, context and instructions are not conveyed clearly enough to guarantee relevant and reliable results. For functional integration, agents require a kind of onboarding: they need to be trained, informed, monitored, and regularly corrected. Since they work probabilistically, they do not necessarily produce the same results even with identical inputs. Validation therefore requires testing, feedback, and review processes. All of this takes effort and scales only to a limited extent.
This raises another question beyond technical implementation: How can AI agents be integrated into existing workflows, teams, and corporate culture? Companies must not only invest in the training and setup of AI agents but also in that of their employees. In the future, they will be the ones validating the results of their AI colleagues and understanding the underlying model limitations. This requires rethinking work models, including clear governance, new roles, flatter structures, and especially well-defined responsibilities.
Context Engineering and the Information Architecture for AI
AI is only as good as the context it receives, even in agentic, iterative architectures. However, it often receives too little input, too much, or input that is too imprecise. When people think of prompting, they think of direct instructions. In real applications, however, the system's actual task is to shape the context dynamically so that the LLM receives exactly the information it needs for the next step.
LLMs function in some ways like human working memory: they retain the beginning and the end but lose track in the middle, as a Stanford University study shows. Long context leads to errors, inefficiencies, and decreased focus (context rot). Models become confused when too many or very similar tools are offered (context confusion), or they stumble over contradictory work steps (context clash). Although the models could theoretically process vast amounts of context, practice shows that the more that is loaded into the context window, the less reliable the results become.
When it comes to AI context, as is often the case in data processing: quality over quantity. Models have only a limited attention span. Every additional context element consumes part of it and dilutes the essential. Well-curated context, known as context engineering, is thus becoming a prerequisite for reliable AI.
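The idea of well-curated, minimal context can be sketched in a few lines. The relevance scores, the token heuristic, and the example snippets below are illustrative assumptions, not part of any specific product; in a real pipeline the scores would come from an embedding or reranking model:

```python
# Minimal sketch of context curation: select only the snippets most
# relevant to the current step, under a fixed token budget.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def curate_context(snippets: list[tuple[float, str]], budget: int) -> list[str]:
    """Greedily pick the highest-scoring snippets that fit the token budget.

    `snippets` is a list of (relevance_score, text) pairs.
    """
    selected, used = [], 0
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost <= budget:
            selected.append(text)
            used += cost
    return selected

snippets = [
    (0.9, "Error code E42 means the sensor driver failed to initialize."),
    (0.2, "The company was founded in 1998 and has 300 employees."),
    (0.7, "Restarting the I2C bus clears E42 in most firmware versions."),
]
print(curate_context(snippets, budget=30))
```

The point of the sketch is the budget, not the scoring: every snippet that does not serve the next step is deliberately left out, even if it is "true".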
Date: 08.12.2025
Push vs. Pull: Data on Demand Instead of in Stock
With the advent of agent systems, the way AI accesses information is changing. While earlier approaches like Retrieval-Augmented Generation (RAG) operated on a push principle, a pull principle is now taking hold. Instead of gathering information in advance and feeding it to the model, the AI now decides for itself what information it lacks and retrieves it purposefully using tools. Instead of an information avalanche, genuine information selection emerges.
This means AI is increasingly taking on an organizational role: it analyzes tasks, identifies work steps and necessary information, and selects tools or data sources to fill these gaps. It becomes the coordinator of information procurement, which aligns with the capabilities of the language model. For companies, this means thinking like an information architect. The key is not the quantity, but the correct dosage: the principle of "Minimum Viable Context" (MVC). The AI should receive exactly the information it needs for the next step.
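The pull principle can be illustrated with a stripped-down agent loop. Everything here is a stand-in: the tool registry, the part database, and the hard-coded tool call replace what a real LLM and real data sources would do. The point is that the context starts empty and is filled on demand:

```python
# Sketch of the pull principle: instead of stuffing retrieved documents
# into the prompt up front (push/RAG), the agent requests the specific
# piece of information it is missing via a tool call.

def lookup_part(part_id: str) -> str:
    # Stand-in for a real data source (ERP system, datasheet index, ...).
    parts = {"MCU-7": "ARM Cortex-M7, 480 MHz, 2 MB flash"}
    return parts.get(part_id, "unknown part")

TOOLS = {"lookup_part": lookup_part}

def run_agent(task: str) -> str:
    context: list[str] = []          # starts empty: no pre-loaded documents
    # A real LLM would decide which tool to call; we hard-code one step.
    name, arg = ("lookup_part", "MCU-7")
    context.append(f"{name}({arg}) -> {TOOLS[name](arg)}")
    # The answer is produced from this minimal, on-demand context only.
    return f"Task: {task} | Context: {'; '.join(context)}"

print(run_agent("Summarize the specs of MCU-7"))
```

This is "Minimum Viable Context" in miniature: one task, one targeted lookup, one fact in the context window.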
Graphs Are the Navigation System for AI Agents
Which information the AI needs in the next step depends heavily on the specific use case: sometimes deep, linear context chains; sometimes broad, branching knowledge structures; clusters of relevant information; or just a single precise excerpt. This is precisely where traditional data structures begin to falter. Graph databases offer a structurally different approach. Forrester refers to this as the backbone for LLMs to represent, capture, and provide context.
Especially in conjunction with AI agents, graphs will take center stage in 2026. As AI systems increasingly coordinate decisions, tools, and processes independently, they require robust and transparent context models. Graphs link knowledge, actions, and interactions in real time, making agents navigable, verifiable, and scalable. This creates a semantic information layer (Knowledge Layer) that not only enables more precise answers but, above all, fosters agents that understand where they stand, what they are doing, why they are doing it, and what the implications of the next step are.
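What such a knowledge layer does differently from a flat document store can be sketched with a toy graph. In a real deployment this would be a query against a graph database (for example Cypher in Neo4j); the in-memory structure and the entity names below are purely illustrative:

```python
# Minimal sketch of a graph as a knowledge layer: nodes are entities,
# edges carry typed relationships, and the agent retrieves the local
# neighborhood of an entity instead of a flat document dump.

from collections import defaultdict

edges = defaultdict(list)

def relate(src: str, rel: str, dst: str) -> None:
    edges[src].append((rel, dst))

relate("Sensor-X", "USED_IN", "Board-A")
relate("Board-A", "RUNS", "Firmware-1.2")
relate("Firmware-1.2", "HAS_BUG", "E42-init-failure")

def neighborhood(entity: str, depth: int = 2) -> list[str]:
    """Collect facts reachable from `entity` within `depth` hops."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for rel, dst in edges[node]:
                facts.append(f"{node} -{rel}-> {dst}")
                next_frontier.append(dst)
        frontier = next_frontier
    return facts

print(neighborhood("Sensor-X"))
```

Because relationships are explicit and typed, the agent can follow exactly the chain it needs (deep and linear, or broad and branching) rather than re-ranking unstructured text.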
The Database of the Future is Adaptive
Databases and data infrastructures are thus becoming a pivotal point for AI success. After four years of AI hype, it is becoming increasingly clear: While hardware and models are advancing into new dimensions, the databases beneath them are still rooted in 1970s thinking. AI systems are expected to deliver peak performance but operate on architectures that were never designed for them. The central question is no longer how databases can be improved, but what a database built for AI looks like.
The next-generation AI database could, for example, function similarly to "live code." Queries are iteratively rewritten and optimized during execution, inspired by modern compiler designs like Just-in-Time (JIT) techniques. The execution plan continuously adapts to data distributions, load patterns, and the available hardware. This creates a permanent feedback loop where the database becomes more efficient with each iteration, even as complexity and data volumes grow. This dynamic architecture forms the foundation for the Knowledge Layer that AI agents will need in the future. (heh)
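The feedback-loop idea behind such an adaptive engine can be reduced to a toy example. The executor below is not a real JIT query engine; it only illustrates how observed statistics can feed back into plan selection. The plan names, the selectivity threshold, and the blending factor are arbitrary assumptions:

```python
# Toy sketch of adaptive execution: the "database" re-estimates predicate
# selectivity from observed results and switches its plan accordingly.

class AdaptiveExecutor:
    def __init__(self):
        self.selectivity = 0.5   # prior guess: half the rows match

    def plan(self) -> str:
        # Low selectivity -> an index lookup pays off; high -> scan all rows.
        return "index_scan" if self.selectivity < 0.1 else "full_scan"

    def execute(self, rows: list[int], predicate) -> list[int]:
        matches = [r for r in rows if predicate(r)]
        # Feedback: blend the observed match rate into the running estimate.
        observed = len(matches) / max(1, len(rows))
        self.selectivity = 0.7 * self.selectivity + 0.3 * observed
        return matches

ex = AdaptiveExecutor()
rows = list(range(1000))
for _ in range(10):
    ex.execute(rows, lambda r: r % 100 == 0)   # only 1% of the rows match
print(ex.plan())   # the estimate has dropped below 0.1 -> "index_scan"
```

Each execution refines the estimate, so the same query is planned differently as the system learns about its data, which is exactly the permanent feedback loop described above, compressed into a few lines.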