SlashML helps companies deploy, manage, and fine-tune open-source models in their private cloud. The problem is most severe for regulated industries, which cannot consume off-the-shelf hosted models.
Current Status
$300 MRR, 42 beta users, 10 design partners, and 2 LOIs ($5k MRR each). We use a top-down sales approach: our beta users are technical heads of AI in regulated industries, and we convert them to design partners, then LOIs, then paying customers.
Problem or Opportunity
We are solving the problem of deploying open-source models in the client's private VPC, a process that today requires a team of DevOps and ML engineers.
The problem exists because of numerous technical challenges, including complex infrastructure setup, model management, scalability, performance optimization, security, and compliance.
These challenges make LLM deployment a time-consuming and difficult process for many companies, especially those lacking specialized skills in AI infrastructure.
Solution (product or service)
SlashML is a dashboard that simplifies the deployment, fine-tuning, and management of open-source language models within a company's own cloud. The platform streamlines the DevOps work of deploying large models to GPUs in a private cloud, work that usually requires multiple DevOps and ML engineers.
The dashboard connects to multiple cloud providers, allowing users to manage instances across GCP, Azure, and AWS from a unified interface. Once fine-tuned, a model can be deployed through the UI without the user having to manage inference performance or GPU costs themselves.
Business model
We have a SaaS business model: we charge $300 per month per user for access to the dashboard, and $5k per month for more than 3 users. This is, of course, a starting point. Our reasoning is that $5k per month ($60k per year) is roughly a quarter of the cost of a single ML engineer.