Competitive advantage when using AI
The main news yesterday was Boston Consulting Group (BCG) publishing the results of its test of Generative AI's impact on its consultants 👩💻
The results were striking: consultants using GPT-4 finished 12.2% more tasks, completed tasks 25.1% more quickly, and produced 40% higher-quality results.
A few weeks back, McKinsey unveiled its new internal generative AI tool “Lilli” - a chat application drawing on more than 100,000 proprietary internal documents and interview transcripts. Lilli (even in its current beta) has been a huge success internally - both in terms of impact and adoption:
➡️ The tool has already dramatically cut the time spent on research and planning - from weeks to hours in some cases, and from hours to minutes in others
➡️ An astounding 66% of employees now use the app multiple times a week
➡️ In the first 2 weeks of August alone, Lilli answered a whopping 50,000 questions
McKinsey concluded that a combination of cost, reliability, and security warranted an in-house solution - despite the higher “tech lift” required. Indeed, McKinsey chose to build its own app on top of third-party LLM technologies:
🔵 Cohere (most likely using their Embeddings and Semantic Search products); and
🔵 OpenAI via the Microsoft Azure OpenAI Service (most likely for the “enterprise-friendly” secure GPT-4 instance)
McKinsey's declaration of being "LLM agnostic" is telling. To them, the LLM is merely a means to an end: achieving business goals. With Lilli built in-house, they have the flexibility to swap out the underlying technology as they see fit.
In many cases, startups will have to offer similar flexibility in the underlying technology to win over enterprises (on top of handling regulation, confidential customer data, security, PII requirements, etc.). That flexibility requires a higher engineering lift - continuous testing and QA across multiple, ever-evolving LLMs on the market - which can be challenging for smaller startups with leaner technical teams. A minimal sketch of what such a model-agnostic layer looks like is below.
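To make that concrete, here's a minimal Python sketch of what an "LLM agnostic" layer can look like: the application talks only to a thin interface, and each vendor sits behind a swappable adapter. The class and provider names are illustrative assumptions, not McKinsey's actual design.

```python
from abc import ABC, abstractmethod


class LLMClient(ABC):
    """Provider-agnostic interface: the application only ever sees this."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class AzureOpenAIClient(LLMClient):
    """Placeholder adapter; a real build would call the Azure OpenAI SDK here."""

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wire up the Azure OpenAI SDK here")


class CohereClient(LLMClient):
    """Placeholder adapter; a real build would call Cohere's SDK here."""

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wire up the Cohere SDK here")


class EchoClient(LLMClient):
    """Stub provider so application logic can be tested without any vendor."""

    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"


# Registry of available providers; which one is used is a config choice.
PROVIDERS: dict[str, type[LLMClient]] = {
    "azure-openai": AzureOpenAIClient,
    "cohere": CohereClient,
    "echo": EchoClient,
}


def answer_question(question: str, provider: str = "echo") -> str:
    """Application code depends on the interface, not on any one vendor."""
    client = PROVIDERS[provider]()
    return client.complete(question)


if __name__ == "__main__":
    # Swapping LLMs becomes a configuration change, not a rewrite.
    print(answer_question("Summarise our latest engagement learnings."))
```

Swapping providers then comes down to a configuration change plus a QA pass over the new model, rather than a rewrite of the application.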
☝ However, the McKinseys and BCGs of the world have one big advantage: decades of data and knowledge gathered across projects, much of it probably already stored and quite well-indexed.
Data, documentation and knowledge bases will be the key moat and competitive advantage in this new era where models will be increasingly commoditized. Knowledge bases are as important to AI progress as Foundation Models and LLMs.
To compete, make sure your documentation and knowledge bases are the best on the planet. When it comes to knowledge, you want to be able to store a lot of it, and you want to be able to find the right piece of it at the right time. For LLMs, this is typically done with a vector database (for now) - see the sketch below.
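In practice, "find the right piece of knowledge at the right time" usually means: split documents into chunks, embed each chunk as a vector, and at question time retrieve the nearest chunks to hand to the LLM as context. Here is a deliberately tiny, self-contained Python sketch; the hash-based embedding and in-memory store are toy stand-ins for a real embedding model and a real vector database.

```python
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: hash words into a fixed-size vector.
    A real system would call an embedding model instead (hashes are only
    consistent within a single process, which is enough for this demo)."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec


class TinyVectorStore:
    """In-memory stand-in for a vector database: store chunk vectors,
    search them by cosine similarity."""

    def __init__(self) -> None:
        self.chunks: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, chunk: str) -> None:
        self.chunks.append(chunk)
        self.vectors.append(embed(chunk))

    def search(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        scores = [float(np.dot(q, v)) for v in self.vectors]
        top = np.argsort(scores)[::-1][:k]
        return [self.chunks[i] for i in top]


if __name__ == "__main__":
    store = TinyVectorStore()
    store.add("2019 retail project: loyalty programs lifted repeat purchases.")
    store.add("2021 banking project: branch consolidation playbook and notes.")
    store.add("Expert interview transcript on pricing strategy for consumer goods.")

    # At question time, retrieve the most relevant knowledge for the LLM.
    for chunk in store.search("What do we know about pricing?"):
        print(chunk)
```

The retrieved chunks get pasted into the LLM's prompt as context - which is exactly why the quality and indexing of the underlying knowledge base matters more than which model sits on top.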