June 2024
The AI test phase at Content5 is complete and our first evaluation results are in. In this blog post, we report our most important findings: Did AI chatbots help us research and work better and more efficiently? And how do we want to integrate them into our work on a permanent basis?
In our trial phase, we focused on ChatGPT (OpenAI), Copilot (Microsoft), Perplexity (Perplexity AI) and Le Chat (Mistral AI). It quickly became clear that no two AI chatbots are the same: their results, and the way those results are presented, sometimes differ greatly. One AI is very good at presenting results clearly, while another provides more reliable search results and better sources. It is therefore impossible to say definitively which tool is best suited for us; it depends on the specific question. Regardless of the idiosyncrasies of the individual tools, we can draw these important conclusions from the test:
- Depending on the issue at hand, the use of AI can certainly help increase the quality and efficiency of our research.
- If our research analysts already have a great deal of prior knowledge in a subject area, the AI results are often too superficial or not precise enough and tend to be of little help.
- The AIs repeatedly showed serious weaknesses: we encountered everything from broken source links and outdated or incorrect data to erroneous citations. Validation and verification of the results by a research analyst therefore remains absolutely essential.
Based on this, some use cases are emerging in which we will add AI to our toolbox in the future:
- Summarising texts and websites for a quick overview of the content
- Introduction to new topics as a basis for in-depth research
- Suggestions and inspiration for analyses and texts
- Answering technical questions (e.g. formulating Excel formulas, help with IT problems)
- Support during research (e.g. generation of search terms, identification of additional sources, creation of lists)
Our next step is to identify and test AI tools for other use cases, for example to support our weekly news monitoring or to analyse large volumes of data. One thing is certain: if we want to use AI productively, we need to build on our experience and establish best practices.
