This is the fourth and final video in a series where I test various open-source models with my custom web search AI agent to evaluate their performance. In this video, I benchmark the Mistral AI models Codestral 22B and Mixtral, along with GPT-4o, against Perplexity AI to see whether my web search agent measures up.
Need to develop some AI? Let's chat: https://www.brainqub3.com/book-online
Register your interest in the AI Engineering Take-off course: https://www.data-centric-solutions.co...
Hands-on project (build a basic RAG app): https://www.educative.io/projects/bui...
Stay updated on AI, Data Science, and Large Language Models by following me on Medium: / johnadeojo
Build your own local “perplexity” with Ollama: • Build your own Local "Perplexity" with Oll...
How to set up with Llama 70b: • Build Open Source "Perplexity" agent with ...
GitHub repo: https://github.com/john-adeojo/custom...
Multi-Hop Questions: https://arxiv.org/pdf/2108.00573
Codestral model card: https://huggingface.co/mistralai/Code...
Mixtral model card: https://huggingface.co/mistralai/Mixt...
Chapters
Introduction: 00:00
Test questions and approach: 01:20
Agent Schema: 04:45
Testing Mixtral: 06:56
Results Mixtral: 27:40
Testing Codestral: 30:16
Results Codestral: 48:12
Live run GPT-4o: 50:36
Results GPT-4o: 01:33:56