OpenAI has added a new ‘deep research’ mode to ChatGPT, allowing users to perform multi-step research on the web for more complex tasks. Notably, deep research is OpenAI’s second AI agent, following last month’s launch of Operator, its agent for browser-based tasks.
What is deep research? How does it work?
Deep research is powered by a version of OpenAI’s o3 reasoning model optimised for web browsing and data analysis. The agent searches, interprets and analyses vast amounts of text, images and PDFs on the web to produce a comprehensive report that, OpenAI claims, approaches the quality of a research analyst’s work.
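OpenAI has not published implementation details, but agents of this kind typically run an iterative plan–search–read–synthesise loop before writing up a cited report. The Python sketch below is purely illustrative of that general pattern, not OpenAI’s code; the helper functions (plan_queries, search_and_read, synthesise_report) are hypothetical stubs standing in for real web tooling and model calls.

```python
# Illustrative sketch of a multi-step research agent loop.
# NOT OpenAI's implementation; all names and logic here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ResearchState:
    question: str
    notes: list[str] = field(default_factory=list)
    sources: list[str] = field(default_factory=list)


def plan_queries(state: ResearchState) -> list[str]:
    """Break the user's question into follow-up web searches (stubbed)."""
    return [state.question, f"{state.question} criticism", f"{state.question} data"]


def search_and_read(query: str) -> tuple[str, str]:
    """Stand-in for web search plus page reading; a real agent would fetch and parse pages."""
    return (f"summary of results for: {query}",
            f"https://example.com/{query.replace(' ', '-')}")


def synthesise_report(state: ResearchState) -> str:
    """Combine accumulated notes into a report with a source list (stubbed)."""
    body = "\n".join(f"- {note}" for note in state.notes)
    citations = "\n".join(f"[{i + 1}] {url}" for i, url in enumerate(state.sources))
    return f"Report on: {state.question}\n{body}\n\nSources:\n{citations}"


def deep_research(question: str, max_steps: int = 3) -> str:
    """Iteratively search, read and take notes, then write a final report."""
    state = ResearchState(question)
    for query in plan_queries(state)[:max_steps]:
        note, source = search_and_read(query)
        state.notes.append(note)
        state.sources.append(source)
    return synthesise_report(state)


if __name__ == "__main__":
    print(deep_research("impact of remote work on productivity"))
```

In a real system, each iteration involves live web requests and model calls rather than stubs, which is part of why a single deep research query takes far longer than an ordinary chat response.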
Unlike normal searches on ChatGPT, deep research queries will take between 5 and 30 minutes to return a result, and the chatbot will send users a notification when their research is complete.
While Deep Research currently only supports text output, OpenAI says it plans to add embedded images, data visualisations and other analytical output in the coming weeks.
Why doesn’t deep research make much sense?
Deep research, much like other AI tools from OpenAI, is still prone to hallucination (making things up), and the company admits as much in its blog post. OpenAI also says that deep research can struggle to distinguish “authoritative information from rumour, and currently shows weakness in confidence calibration, often failing to convey uncertainty accurately”.
All of which raises the question of how useful the ‘detailed report’ from deep research really is if you can’t be sure whether the AI has made up some of the facts or whether the output is based on genuine sources. Sure, you have the option to go through the detailed citations to verify the information in the deep research report, but that more or less negates the purpose of having an AI agent in the first place.
For what it’s worth, OpenAI says that deep research will be able to connect to “specialised data sources”, including subscription-based or internal resources, to “make its output even more robust and personalized”.