Discussion about this post

Michael Woudenberg

It's all about the macro-micro-macro-micro zoom. Out for context, in for details, back out for alignment, back in for refinement.

Daniel Nest

Interesting parallels, although now that you've asked for clarity:

"Deep research" refers to a feature offered across many AI products rather than to separate models. In fact, OpenAI's "deep research" tool is powered by the o3 model under the hood. In Google's case, "deep research" runs on Gemini 2.5 Flash if you're a free user or on Gemini 2.5 Pro if you have a paid Google account.

But I still think your analogy might sort of hold. In the case of OpenAI, you could roughly split it into:

Thinking fast = GPT-4o (a quick, low-latency chat model that responds with the first surface tokens it pulls from its training data).

"Deep Research," ironically enough, slots better into the "wide" thinking mode, as it typically goes broad to pursue many dozens of sources (or even hundreds, in the case of Google).

But because "deep research" also uses reasoning models under the hood, it can go "deep" into the topic after pulling the sources together.

So it's harder to draw a clear line between reasoning models and deep research, since they work in tandem.

As for dad jokes, I don't think I could do better than "fast, deep, and wide."

It's already too on the penis. Uh, too "on the nose" I meant.
