A Proving Year for AI After the Hype: Graph and AI Trends

By Bernhard Lück | Translated by AI | 8 min reading time

AI is forging ahead and moving from experiment to practice. This raises two questions: What works? And where is it worthwhile? Graph database provider Neo4j takes a look at new forms of AI, the role of graphs and the criteria that will make or break AI in future.

AI and graph technology - the experts from graph database provider Neo4j take a look at the trends for 2025. (Image: © Getty Images)

While some are already working on the next AI coup, many companies are still struggling with integration. Neo4j has summarized the 2025 trends around graph databases and AI.

1. AI Adoption Between Vision and Reality

Sometimes AI seems like one big gamble with an uncertain outcome. The exorbitant investments have by no means decreased in 2024. Cloud hyperscalers are expanding their computing capacities, AI providers are feeding their models and superchip manufacturer Nvidia is rushing from one record to the next. In the working environment, GenAI is no longer a newcomer, but a daily assistant. There is hardly a developer who does not use it when programming. In Germany, ChatGPT & Co. are so popular that almost half of employees (49%) would continue to use their AI solutions even if their boss banned them.

And yet: the introduction of AI in companies is proving difficult in many places. Europe is lagging behind the rest of the world. In Germany, companies are struggling with regulatory uncertainty, a lack of strategies and a shortage of suitable use cases. Despite this, spending on AI solutions and features keeps rising and is putting pressure on IT budgets, while the expected return on those investments is often a long time coming. According to Gartner's Hype Cycle, the miracle tool GenAI is heading into the Trough of Disillusionment and will first have to prove what it can really do next year.

2. Agentic AI: Agents on the Rise

While companies are still working on practical implementation, the development of AI continues unabated. In 2023, users chatted with chatbots; in 2024, AI agents began taking over entire workflows and routine tasks. This is known as agentic AI: AI that has access to a range of tools (e.g. databases, interfaces or service integrations). Agent-based AI is capable of "chaining", meaning it can break a query down into individual steps and process them sequentially and iteratively. It acts dynamically, plans and adapts its actions depending on the context, and delegates subtasks to various tools.
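The chaining loop described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's agent framework: the tools and the hard-coded "plan" are toy stand-ins for what a real agent would obtain by asking an LLM to decompose the request.

```python
# Minimal sketch of an agentic loop: a planner splits a request into
# steps and dispatches each step to a registered tool. The tool
# functions and the fixed plan are invented stand-ins.

def lookup_order(order_id: str) -> str:
    # Stand-in for a database tool
    return f"order {order_id}: shipped"

def send_mail(text: str) -> str:
    # Stand-in for a service integration
    return f"mail sent: {text}"

TOOLS = {"lookup_order": lookup_order, "send_mail": send_mail}

def plan(request: str) -> list[tuple[str, str]]:
    # A real agent would ask an LLM to decompose the request;
    # here we return a fixed two-step plan for illustration.
    return [("lookup_order", "4711"),
            ("send_mail", "status update for order 4711")]

def run_agent(request: str) -> list[str]:
    # Execute the plan sequentially and collect each tool's result
    results = []
    for tool_name, arg in plan(request):
        results.append(TOOLS[tool_name](arg))
    return results

print(run_agent("Where is order 4711? Notify the customer."))
```

In a real system the loop would also feed each tool result back into the planner, so the agent can revise the remaining steps iteratively.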

Agentic AI is not new. In the coming year, however, it could see a success story similar to GenAI's. In the fall, Anthropic presented AI agents in Claude that operate a computer almost like a human, typing, clicking and searching the Internet for information on their own. Microsoft has also launched its own agents, which are set to take on tasks in sales, customer support and accounting.

Outsourcing routine tasks to AI sounds tempting but is also unsettling. How can the agents be controlled and reined in if something goes wrong? Who takes responsibility when it does? It is one thing to ask a chatbot to suggest a reply to an email; it is quite another for the AI to compose and send the message to a business partner on its own. Especially since agents make mistakes and even get distracted: Anthropic's Claude, for example, suddenly took a break during a demo and started searching the Internet for photos of Yellowstone National Park. Criteria still need to be defined for verifying correct execution and for reacting when errors occur.

3. Reasoning AI: Thinking Aloud in the Black Box

Reasoning AI is also not entirely new, but it is highly interesting. As with GenAI, LLMs generate answers here, but they take considerably more time to "think aloud" about the question. The models weigh options, draft solutions and discard them again before settling on a suggestion. This takes longer, but the quality of the results is significantly higher. With such logical and mathematical capabilities, OpenAI's model o1 even scored among the top 500 students in the US on the AIME (American Invitational Mathematics Examination), a qualifier for the USA Mathematical Olympiad.

But reasoning AI has a problem: the "chain of thought" takes place hidden inside the LLM and cannot be observed from the outside. The AI's "loud reasoning" therefore actually happens in silence, which significantly undermines trustworthiness. In addition, the increased runtime and cost make it better suited to individual research tasks than to everyday end-user use.

4. Artificial General Intelligence (AGI)

While agentic AI and reasoning AI are already a reality, Artificial General Intelligence (AGI) remains science fiction, at least for the time being. There is still a long and largely hypothetical way to go before artificial intelligence catches up with or even surpasses general human intelligence. As impressive as the leaps in AI in recent years seem, there are still very simple tasks at which AI fails dramatically (e.g. scrolling and drag-and-drop). Moreover, it is unclear whether the path AI development is currently on will ultimately lead to AGI at all and - perhaps more importantly - whether this kind of general higher intelligence is necessary or wanted at all. In many cases, it will be more a matter of specializing AI.


5. Small Language Models (SLMs): Vertical and Economical

Instead of science fiction, companies in 2025 will primarily be concerned with putting existing AI technologies to effective use in practice. Integration is not only a question of compliance and expertise, but also one of cost. Used at scale, AI is not exactly cheap (much like the cloud). In addition, an LLM trained on publicly available data leaves little room to stand out from other users and competitors in the market. Companies are therefore increasingly turning to vertical AI that is tailored precisely to individual use cases and needs and is continuously refined, optimized and adapted (post-training).

Instead of large language models, small language models (SLMs) are increasingly being chosen, as in domain- and industry-specific areas they can easily compete with the big models in terms of performance. Their advantages: the small models can be controlled and validated more easily (e.g. via knowledge graphs), training with high-quality data is faster, and they require less than five percent of the energy LLMs consume - not an insignificant point in view of the EU Green Deal and companies' ESG reporting obligations. In addition, good LLMs can be used to generate high-quality synthetic training data for SLMs, effectively "teaching" the small models.

6. All You Need is Data

While AI providers such as OpenAI, Anthropic, IBM and Google are slowly running out of public data, companies are primarily concerned with leveraging their own data. How powerful the AI actually becomes depends on the ability of those responsible to link and enrich the models with their own data sets - from Retrieval Augmented Generation (RAG) to fine-tuning and training their own models. Data quality is therefore crucial. In most cases, organizations have sufficient structured data that already represents the essence of their business operations.

As important as structured data is, it accounts for only ten percent of the available data. The other 90 percent consists of unstructured data (e.g. documents, video, images). GenAI, natural language processing and graph technology help to make this data usable. Knowledge graphs, for example, represent unstructured data in such a way that LLMs can "understand" it as context. Thanks to the graph structure, the data retains its richness.
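The idea of serving a knowledge graph to an LLM as context can be sketched as follows. This is a toy illustration with invented facts: the graph is a handful of (subject, predicate, object) triples, and an entity's neighborhood is rendered as plain sentences that could be prepended to an LLM prompt.

```python
# Sketch: a tiny knowledge graph as (subject, predicate, object)
# triples, serialized into plain-text context for an LLM. The
# companies and facts are invented for illustration.

TRIPLES = [
    ("Acme GmbH", "SUPPLIES", "WidgetCo"),
    ("WidgetCo", "LOCATED_IN", "Hamburg"),
    ("Acme GmbH", "PRODUCES", "sensors"),
]

def neighborhood(entity: str) -> list[tuple[str, str, str]]:
    # All triples in which the entity appears, i.e. its graph neighborhood
    return [t for t in TRIPLES if entity in (t[0], t[2])]

def as_context(entity: str) -> str:
    # Render the neighborhood as sentences to prepend to an LLM prompt
    return "\n".join(f"{s} {p.lower().replace('_', ' ')} {o}."
                     for s, p, o in neighborhood(entity))

print(as_context("Acme GmbH"))
```

Because the triples keep their links explicit, the retrieved context preserves the relationships between entities rather than flattening them into isolated text snippets.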

7. Graph Technology Takes Center Stage

As a network of information at every level, graphs offer an ideal representation of data, whether structured or unstructured. GraphRAG is a good example: this RAG approach adds a knowledge graph as an additional source of domain-specific data in GenAI applications, making results more accurate, more up-to-date, more explainable and more transparent. Graph patterns are playing an increasingly important role here; these patterns capture differentiated information and can answer certain types of complex questions.

Graph neural networks (GNNs) are another example of how AI and graphs are interlinked. These neural networks tackle particularly difficult problems. Google DeepMind has been working with GNNs for years on numerous projects, such as intelligent weather forecasting (GraphCast) and AI-supported semiconductor design (AlphaChip). In November 2024, the company released the third version of AlphaFold, an AI system that can precisely predict the structure of proteins and their interactions with other biomolecules.

The interaction between graphs and AI also runs in the other direction. For example, LLMs help with graph modeling, improve domain and model understanding, communicate and interact with the data stored in the graph and identify and create new links.

8. Evaluation of AI by AI

Given such technologies, getting started with developing AI applications is now quite simple. Validating an application and reliably moving it into production, on the other hand, takes a lot of time and effort. LLMs work probabilistically, i.e. their statements are only correct with a certain probability. Evaluation will therefore be the big topic of 2025. Control and feedback mechanisms are urgently needed to avoid error propagation, check data quality and comply with regulatory requirements.

Conventional approaches often fall short, so AI is used to control AI. Referee LLMs, for example, can scrutinize the results of another LLM and check them for correctness, for appropriateness to the question, and for inappropriate or illegal content. AI-based fairness toolkits test for data bias. Anthropic is currently researching so-called interpretable features, which reside in the models themselves and steer GenAI results in a certain direction. Implemented correctly, these tendencies could be controlled and then serve as safety mechanisms.
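A referee check of this kind can be sketched as a simple review step. In this toy example the judge is a rule-based stub standing in for a second LLM, and the banned-terms list is an invented example policy; a real referee would instead prompt a judge model with the question and the answer under review.

```python
# Sketch of a "referee" check: a second model reviews another model's
# answer before it is released. The judge here is a rule-based stub
# standing in for a real referee LLM; the policy list is invented.

BANNED_TERMS = {"guaranteed cure", "insider tip"}

def referee(question: str, answer: str) -> dict:
    # A real referee LLM would be prompted with question and answer;
    # this stub only applies two mechanical checks.
    issues = []
    if not answer.strip():
        issues.append("empty answer")
    for term in BANNED_TERMS:
        if term in answer.lower():
            issues.append(f"inappropriate content: '{term}'")
    return {"approved": not issues, "issues": issues}

verdict = referee("Is this investment safe?",
                  "This is an insider tip with a guaranteed cure for losses.")
print(verdict)
```

The point of the pattern is the pipeline shape: the generating model's output only reaches the user after the reviewing instance has approved it.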

9. Lingua Franca for the AI World

AI interacts with people, machines and other AI models. In a chatbot, of course, the AI responds in natural language. But the tech world is multilingual: other systems use other languages (e.g. query languages, API code), and in the future AI models will increasingly communicate with one another. The more AI is integrated into existing IT infrastructures, the more important appropriate "language" interfaces become. In the graph environment, for example, LLMs serve as interpreters that translate natural-language questions into the query language Cypher (Text2Cypher). In the long term, the question arises whether a standardized lingua franca is needed to ensure reliable communication and avoid a Tower of Babel-style linguistic chaos - or whether it is precisely the flexibility of natural and technical languages that is a major advantage in the use of LLMs.
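The Text2Cypher pattern mentioned above can be sketched as follows. Everything here is a hedged illustration: `generate_cypher` is a stub standing in for a real LLM call, the schema is invented, and a production system would execute the query via the official Neo4j driver rather than printing it. The read-only guard shows a common safeguard around generated queries.

```python
# Sketch of the Text2Cypher pattern: build a prompt containing the
# graph schema and the user's question, have an LLM emit Cypher, and
# guard the result before running it. generate_cypher is a stub for
# a real LLM call; the schema is an invented example.

SCHEMA = "(:Person {name})-[:WORKS_AT]->(:Company {name})"

def build_prompt(question: str) -> str:
    return (f"Graph schema: {SCHEMA}\n"
            f"Translate into a single read-only Cypher query:\n{question}")

def generate_cypher(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to an LLM
    return ("MATCH (p:Person)-[:WORKS_AT]->"
            "(c:Company {name: 'Neo4j'}) RETURN p.name")

def is_read_only(cypher: str) -> bool:
    # Reject queries containing write clauses before execution
    forbidden = ("CREATE", "MERGE", "DELETE", "SET", "REMOVE")
    return not any(word in cypher.upper().split() for word in forbidden)

query = generate_cypher(build_prompt("Who works at Neo4j?"))
assert is_read_only(query)
print(query)
```

Passing the schema in the prompt is what lets the LLM map natural-language terms onto the labels and relationship types that actually exist in the graph.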

10. Integration Instead of Major Redesign

In the last two years, there has been a lot of talk about building GenAI systems entirely from scratch. The reality, however, is different: companies face an existing, complex IT infrastructure that cannot simply be replaced. In practice, the task will therefore mainly be to integrate AI components in a meaningful way or to add AI capabilities to existing solutions and systems.

At an operational level, a basic framework is needed in which guidelines are established, processes are standardized and goals are defined, ideally involving all AI stakeholders in the company (e.g. C-level, development team, IT, compliance, specialist departments). At a technical level, AI needs to be packaged as encapsulated, integrable components and plugged in at selected points (e.g. user interaction, data analysis). How to manage this growing architectural complexity will remain one of the big questions of the coming years - and one that AI itself may be able to help answer.

Article source: BigData-Insider