As 2023 draws to a close, it’s time to reflect on TrueFoundry’s journey over the past year. This reflection isn’t just a celebration of our achievements but also an acknowledgment of the challenges we’ve navigated, an appreciation of the opportunities we’ve been presented with, and the learnings we’ve embraced. Instead of focusing on operational details, we will walk you through a chronological journey of learnings and realizations, indexed on our thesis on MLOps and how things played out in reality.
I am personally a space enthusiast, so I find the traditional analogy of startups as rocket ships befitting. If I had to describe our timeline as building a rocket ship, then 2022 was the year of putting together the engine and taking a test ride, whereas 2023 is when we set the course for the stars and secured the thrusters for our cosmic odyssey! I am very excited to walk you through our journey of 2023, but first let me set some context about TrueFoundry and the beginning of the year.
TrueFoundry is building a cloud-agnostic PaaS on Kubernetes that standardizes the training and deployment of machine learning models using production-ready, developer-friendly APIs.
In 2022, we spent time building our team, developing the plumbing layer of the platform across different cloud providers, and working closely with our first few design partners. We developed the core service deployment layer, built the UI, CLI, and Python SDK experiences, and experienced the joy of our first customer dollar! We also realized that selling in the MLOps space is difficult because most companies had built “something that worked” and the resistance to change was very high.
On the other hand, we had thoroughly validated the problems we were solving:
By this time, we had identified that we were solving a major problem with a huge economic impact. The challenge was that this wasn’t an urgent problem in the customer’s mind. Our learning from this episode:
Solving a major pain point is critical for sustainability, but you can’t fabricate urgency; the customer and the market will decide that. It’s the business world’s equivalent of the laws of physics. Don’t fight it. Keep looking!
With that, we started 2023, where we had multiple GTM experiments to run based on our learnings from working with design partners. To give a few concrete examples of the experiments we ran:
So, after a number of partly successful or failed experiments, we had further confirmed how prevalent the problems we were trying to solve were, but we still could not find our path to identifying a narrow customer persona with the exact same urgent problem that could be repeatably identified externally.
This continued until everybody in the world wanted to work with LLMs, and we were presented with a well-timed opportunity.
LLMs aggregated the demand for us. Everybody wanted to work with LLMs, and everyone now “urgently” faced the same problems we were trying to solve.
Recounting a few of those problems here in the context of LLMs:
These are just some examples; there are many other similar use cases, like setting up async inference, GPU-backed notebooks, shared storage drives across notebooks, and cold start times of large Docker containers, that companies found hard to solve.
It turns out that all of our customers who come to us for LLMs and experience some of these benefits, like reducing Data Scientists’ dependency on infra teams, saving costs, or scaling applications across cloud providers while avoiding lock-in, realize that the same applies to other machine learning models that are not LLMs, and realistically to the rest of the software stack as well. We see this cross-pollination of use cases: customers who started using us for deploying software or classic ML models are now seeing benefits with LLMs.
This strengthens our belief that the time we spent building our core infrastructure, with the opinionated perspective that ML is software and should be deployed similarly, that Kubernetes will win in the long term, and that companies will want to avoid vendor lock-in, whether to clouds or other software vendors, is paying off for us and our customers.
To conclude with the rocket ship analogy:
If 2023 was the year where we charted the territory and got the thrusters ready, we are looking forward to a 2024 where we ignite the boosters and propel this rocket ship!
Wishing you all a very Happy New Year on behalf of the entire TrueFoundry team! Welcome 2024.