Cloudastructure recently held a webinar on May 4 in which Cloudastructure Founder & CEO Rick Bentley and Kingscrowd Founder, CEO and Editor Chris Lustrino shared and debated the criteria they personally look for when deliberating an AI investment. You can watch the full recording here, or read on for a synopsis of their discussion and recommendations.
What is the revenue model?
An AI startup's revenue model is a critical factor to consider when evaluating an investment opportunity. As Rick says, “First and foremost, you need ‘a’ revenue model.” Microsoft, Google and others have put a lot of money into chatbots with no real revenue model in sight yet. Not a big deal? Well, Facebook put a whole lot of money into virtual reality without any revenue model, and the results were as you might expect. As big companies, they can afford such losses. Smaller companies need to start with a business model in mind.
OpenAI is an interesting example: it started as a non-profit, and over time it developed a revenue model that so far looks very promising.
Investors should carefully evaluate the startup's revenue model to ensure that it is sustainable and can generate long-term revenue growth.
Attractive revenue models
According to Chris, right now one of the most attractive revenue models is the consultancy model. Instead of hiring a team of consultants to do work that takes hours and is very costly (with low margins), you can use AI to do the work and dramatically improve the output while reducing costs.
Rick adds that right now the market is ripe for a land grab, where first-mover advantage is going to be huge. “Look at Craigslist. It’s a web 1.0 product, but it still works and is highly used.” Rick also suggests that investors look for legal constraints that could pop up in certain states, or trade union policies that might limit the power and applicability of some of these AI solutions. If a chatbot gives medical or legal advice, both highly regulated industries, what are the implications?
Does the company own the customer or is their AI someone else’s cost center?
Let’s start out with what Rick means by this: If you have a computer vision technology for video surveillance, for example, you can sell it direct to the end user or you can sell it to someone else who sells it to the end user. If you do the latter, however, you are just a cost center and your customer may consistently try to push your price down or even switch to a competitor. Rick goes on to suggest that there is a lot of middleware out there that looks good now but may have no value in the next 18 months.
Conversely, Chris counters there are a lot of middlemen out there now who are making money and also those who have proprietary databases - even if they don’t own the customer or end solution, having a proprietary database will still be valuable and monetizable. There are a lot of new monetization and business models that we don’t even know yet and that changes the paradigm of what the customer and company interaction looks like. Rick agrees and says there is a time window where middleware is valuable and we may be in it right now.
Do they utilize manual labor (people) or Machine Learning?
Chris starts off by recalling X.ai, a very early AI company that sold for a very small amount of money in 2012/2013. The founder said at some point, “We were too early, and the reality was that most of our business was human run,” meaning they didn’t have enough data to make it work with AI. This was not uncommon in the mid-2000s, but AI is a lot more sophisticated today.
What is Artisanal vs. Real AI?
Something that is artisanal essentially means an artist created it by hand. The first chatbots were artisanal: the bot relayed an intro line, and responses were pre-programmed based on manually set-up workflows. They didn’t have Machine Learning; responses did not evolve from customer data the way ChatGPT’s do. Chris also cited Drift. Hot just a few years ago, Drift was a chatbot plug-in that could answer questions and automate responses, but Drift was not using AI. The bot was just following prompts from an existing workflow. It was not learning from the data coming in or out, so it never got smarter.
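To make the distinction concrete, here is a minimal sketch of an “artisanal” chatbot. The keywords and canned lines are hypothetical; the point is that every response is hand-written and chosen by fixed rules, so nothing here ever learns from conversation data:

```python
# A hypothetical "artisanal" chatbot: every response is hand-written
# and selected by a fixed keyword rule. Nothing below learns.
RULES = {
    "pricing": "Our plans start at $20/month.",
    "demo": "You can book a demo with our sales team.",
}
FALLBACK = "Let me connect you with a human."

def artisanal_reply(message: str) -> str:
    """Match keywords against a manually authored workflow."""
    text = message.lower()
    for keyword, canned_response in RULES.items():
        if keyword in text:
            return canned_response
    # No matching rule: the bot has no way to improve from this miss.
    return FALLBACK

print(artisanal_reply("How much is pricing?"))  # the canned pricing line
print(artisanal_reply("Tell me a joke"))        # the fallback, every time
```

However many conversations this bot has, `RULES` never changes — which is exactly the gap between a scripted workflow and Machine Learning.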
Bottom Line for Investors: If companies are doing Machine Learning, they will generally let you know. Those who say they are AI but don’t mention Machine Learning – well, you may want to take a closer look.
Is that Machine Learning Supervised or Unsupervised?
This is where it comes full circle. If you leave Machine Learning entirely up to the computer, it can come back with crazy stuff. Microsoft’s Tay is the perfect example: Microsoft released Tay on Twitter in 2016, and it took the Twitter audience less than 24 hours to turn Tay into a fire-breathing racist. Microsoft shut Tay down and apologized. ChatGPT, by contrast, uses supervised learning, and its content is curated. At Cloudastructure, we not only do supervised learning in-house, we encourage and train our customers to continue to curate their surveillance content, particularly when it comes to facial recognition: every now and again the computer gets it wrong, and by telling the computer it is wrong, the AI improves.
Bottom line: There should always be a human in the loop to monitor and accelerate the AI model. Startups that utilize supervised learning have a significant advantage over those that do not.
Is the AI generating original data that scales?
If you put bad data in, you get bad data out. Chris notes that you need quality data to make these tools work, and the data set needs to be complete and accurate. Rick concurs and cites Tesla. Tesla took the time to put cameras on all its cars to collect vast amounts of real-life data on the road. The video is the input: what the driver sees. The brakes, steering, etc. are the output: what the driver does. Competitors claimed it was fine to focus only on the output, what the driver does, and that they’d be able to train a computer to drive a car too. Because Tesla has the more complete data set, including the video input, it has the world’s best self-driving system.
Chris notes that this concept also applies to investment models. Investment models should go all the way back in time and include other data sets, such as the news: when these events happened, this happened to the market, and so on. Again, the more complete the data set, the smarter the AI will be.
Bottom Line: Investors should look for startups that can collect and analyze vast amounts of data from multiple sources and use that data to train their AI models.
What does comprehensive data governance mean?
While KingsCrowd is not an AI company per se, it is a database company, and for database companies, as for AI companies, comprehensive data governance is vital. Chris explained that at KingsCrowd they have three layers of data checks, because once they publish data, it is often seen as truth. Checks and balances matter. An AI company has to have a system for spotting potential biases: are they using multiple news sources, or just one? The startup needs to consider all viewpoints and provide them to the AI training model.
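Layered data checks like the ones Chris describes can be sketched as a simple gate in front of publishing. The check functions and record fields below are hypothetical, not KingsCrowd’s actual pipeline:

```python
# A minimal sketch of layered pre-publication data checks.
# Field names and thresholds are illustrative assumptions.

def check_schema(record):
    """Layer 1: the record has every required field."""
    return {"company", "valuation", "sources"} <= record.keys()

def check_ranges(record):
    """Layer 2: values are plausible (e.g. a positive valuation)."""
    return record["valuation"] > 0

def check_sources(record):
    """Layer 3: guard against single-source bias by requiring corroboration."""
    return len(record["sources"]) >= 2

CHECKS = [check_schema, check_ranges, check_sources]

def ready_to_publish(record) -> bool:
    """A record ships only if every layer of checks signs off."""
    return all(check(record) for check in CHECKS)

record = {"company": "Acme", "valuation": 5_000_000,
          "sources": ["filing", "press release"]}
print(ready_to_publish(record))  # True: all three layers pass
```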
Bottom Line: We are moving to a world where, if you are a technology company, you are essentially becoming a database. And your proprietary database gives you the power to create great AI bots. But you need to make sure you protect your database and monitor all the inputs going into it.
How important is computing power?
In general, according to Chris, the companies whose AI bots can be trained on larger data sets with greater computing power will have the best bots out there. On the other hand, Chris noted that Sam Altman, CEO of OpenAI, recently announced that they don’t plan on making their models any bigger.
Rick countered that early on, companies such as Facebook, Google and Microsoft can put a lot of resources into building these initial proprietary platforms, but in the very near future he believes these tools will move to open source, where compute power is less important, since the open source community can do the compute and/or build its tools off the work of the first movers.
Bottom Line: Chris thinks companies with the most complete data sets and the greatest compute power will win - at least for now. Rick countered that while that may be true for the “pioneers” doing pure research in the wilderness of AI, it won’t necessarily be true for the “settlers” who find specific use cases for the AI model. Google’s release of TensorFlow was a great example: the method went open source, and multiple machine learning engineers came up with use cases and built off of it.
Is the company creating IP protectable models?
Chris compares this to Apple versus Android: if you protect your proprietary data set, you can sell it as a premium data offering, because no one else has this data. This in turn creates a higher-margin business and more value for investors.
Rick agreed to some extent. He recalls that Oracle created a very popular proprietary database back in the day and made a killing doing so. Now, of course, the market is full of open source database competitors and Oracle’s market share is not what it once was, but Larry Ellison still enjoys the profits. Rick believes that, perhaps initially, proprietary databases will enjoy bigger profits and market share, but over time open source will come in and win in the end. He thinks that time is coming soon for AI platforms.
Why are adaptable and modifiable interfaces and high quality user experience important to consider?
Chris sees a lot of startups getting excited about AI, developing something cool and then looking for investment. What surprises him is how many of these companies are building something just because it’s cool, without considering whether the technology is useful, monetizable or valuable to the customer. So again, be careful when looking at an AI opportunity and make sure there is a revenue model behind it.
Where do ethics and parameters come in on AI & Machine learning?
Where things get dangerous is self-replication. Once you lose the off switch, and once you lose the ability to insert supervised learning, you have lost control. The cluster can go off and create its own followers, for example. This may still be more of a Skynet sci-fi future scenario, at least for now. After all, as long as you pay the power bill, you have control over what the computer can do.
What businesses will be impacted by AI and Machine Learning?
Chris starts with a story about Chegg, a company that sells learning tools for students. Its stock plummeted 40% recently, and the company cited students using AI tools to write papers and take tests for them, instead of actually reading books.
Rick suggests that originally we thought it would be the truck driver or warehouse worker who would be replaced. That may not end up being the case. Medical advice, legal advice, “write my research paper,” etc. seem to be the most popular applications of AI right now. It’s the white-collar, highly trained individual who can be replaced. The truck driver still needs to get out of the truck if it’s raining and the road is closed, and find a different way along a route that’s not on Google Maps. Or you have to get out and change the tire. There are edge cases that humans handle well. But in narrow fields like “explain the five points of intellectual property law,” there isn’t a lot of room to go off the rails. It’s perfect for AI and machine learning.
It’s hard to say which companies it ends up hitting. Lawyers could get together and say this thing can’t give legal advice without a license to practice law. But what happens when the AI aces the bar exam and applies for a law license? It will be an interesting future.
Need more context? Feel free to watch the webinar recording here. Or, if you’d like to keep Rick and Chris’s criteria handy for your future AI investments, download our cheat sheet! And don’t forget to register for our next webinar:
Breaking Down Barriers: Crypto, AI and Crowdfunding
Thursday June 15th
9 am PT
And don’t forget to use the KingsCrowd coupon AI 30 for a free 30 day Edge Pro membership to KingsCrowd!