First, an apology to subscribers. I have been dealing with a distressing family bereavement, so it has been some time since I posted. Getting back to normal now …
Like everyone else, I can hardly escape the buzz surrounding Large Language Models (LLMs) and Artificial Intelligence. Software vendors of all sizes and in all verticals - perhaps particularly those creating operational applications - are convinced they must embrace AI to stay in the game. They are also convinced they must adopt AI quickly.
The benefits they hope for are simple enough:
Efficiency, if AI can automate repetitive tasks and reduce laborious work.
Improved decision-making, if AI-powered analytics can actually provide valuable insights.
A more personalized user experience, if AI-driven tools can analyze and respond to user behavior, preferences and requirements.
These are good goals, but where to start? Most consultancies can rattle off a standard answer: identify use cases, develop a strategy, assemble a team, choose a technology, implement and iterate. There you go, an AI strategy ready for use. Build a slide for each clause and you have a deck too.
It’s not enough. For one thing, how do we identify use cases? Is the now-ubiquitous chatbot really a use case? Here’s a more focussed answer. For almost any business scenario there are three use cases you can (and should) implement quickly and effectively:
Time series prediction.
Outlier or anomaly detection.
Recommendation / association engine.
None of these require artificial intelligence in particular, and certainly none require LLMs. But if you don’t have the experience in-house, it can be tempting to just throw data at a generic AI system and ask it for predictions or to find outliers and serve up some recommendations. And you know what? It will do so quite readily. Whether the model makes reliable predictions, finds meaningful outliers or serves useful recommendations … that’s a different matter. But identifying the use cases should be straightforward, and as the sketches below suggest, classical techniques will take you a long way.
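To make these concrete, here is a minimal sketch of the first use case, time series prediction, using classical Holt-Winters exponential smoothing from statsmodels. The monthly_sales series is synthetic, a hypothetical stand-in for your own operational data.

```python
# A minimal sketch of classical time series prediction with Holt-Winters
# exponential smoothing. The "monthly_sales" series is synthetic:
# a hypothetical stand-in for your own operational data.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(42)
months = pd.date_range("2021-01-01", periods=36, freq="MS")
monthly_sales = pd.Series(
    100 + 2 * np.arange(36)                        # upward trend
    + 10 * np.sin(np.arange(36) * 2 * np.pi / 12)  # yearly seasonality
    + rng.normal(0, 3, 36),                        # noise
    index=months,
)

# Fit additive trend and seasonality, then forecast six months ahead.
model = ExponentialSmoothing(
    monthly_sales, trend="add", seasonal="add", seasonal_periods=12
).fit()
print(model.forecast(6))
```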
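Outlier detection is just as approachable with an off-the-shelf method such as scikit-learn’s IsolationForest. The transactions array is again hypothetical; substitute whatever numeric features describe your own records.

```python
# A minimal sketch of outlier detection with scikit-learn's IsolationForest.
# The "transactions" array (amount, duration) is hypothetical; substitute
# whatever numeric features describe your own records.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[50.0, 5.0], scale=[10.0, 1.0], size=(500, 2))
odd = np.array([[200.0, 1.0], [5.0, 30.0], [180.0, 25.0]])  # unusual rows
transactions = np.vstack([normal, odd])

# fit_predict returns -1 for predicted outliers and 1 for inliers.
labels = IsolationForest(contamination=0.01, random_state=0).fit_predict(transactions)
print(transactions[labels == -1])
```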
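And a recommendation / association engine can start life as nothing fancier than co-occurrence counting over purchase baskets, with no machine learning library at all. The baskets here are invented for illustration.

```python
# A minimal sketch of an item-to-item association engine built from
# co-occurrence counts over purchase baskets. The baskets are invented
# for illustration.
from collections import Counter
from itertools import combinations

baskets = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"beer", "crisps"},
    {"bread", "jam"},
    {"beer", "crisps", "bread"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(item, top_n=3):
    """Return the items most often bought together with `item`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return scores.most_common(top_n)

print(recommend("bread"))  # e.g. [('butter', 2), ('jam', 2), ('beer', 1)]
```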
Recently I have spoken with three AI service providers, all with different approaches, but all specialising in getting you onto the AI bandwagon reasonably quickly and effectively. Yes, it is a bandwagon, and you know you want to be on it!
In strictly alphabetical order …
Defog.ai embeds a familiar Q&A-style interface for analytics in your app, but it works from your database’s metadata (for SQL) or from unstructured data that you load. It’s an effective way of adding natural language capabilities to your system without compromising the privacy of your data. For simple chatbot-style interfaces in data-sensitive domains, it’s a very promising solution.
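To show what working from metadata means in practice, here is a generic sketch of the pattern; it is not Defog’s actual API. Only the schema is handed to the model, the generated SQL runs locally, and generate_sql is a hypothetical placeholder for whichever model call you use.

```python
# A generic sketch of the metadata-only text-to-SQL pattern: the model
# sees the schema, never the rows, and the generated SQL runs locally.
# This is NOT Defog's actual API; "generate_sql" is a hypothetical
# placeholder for whichever model call you use.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "EMEA", 120.0), (2, "APAC", 80.0), (3, "EMEA", 45.5)],
)

# Only this schema description would leave your environment.
schema = "\n".join(
    row[1] for row in
    conn.execute("SELECT name, sql FROM sqlite_master WHERE type = 'table'")
)

def generate_sql(question: str, schema: str) -> str:
    # Hypothetical stand-in for the LLM call; it receives metadata only.
    return "SELECT region, SUM(amount) AS total FROM orders GROUP BY region"

query = generate_sql("What are total sales by region?", schema)
print(conn.execute(query).fetchall())  # the rows never left the database
```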
Konverge.ai is a full-service offshore data science consultancy with a comprehensive approach and a skilled team, taking you from detailed requirements analysis through prototyping to delivery.
Rapyd.ai, based in Germany and in Boston, USA, can deliver AI prototypes quickly and effectively, especially in computer vision and natural language processing.
I’d be happy to connect readers with any of these teams directly. Ping me on donald.farmer@treehivestrategy.com and I’ll make introductions.