Mentron Labs
Built for AI Teams

Spin up powerful GPU instances in seconds. EU data residency by default, GDPR/AI-Act aligned.

mentronlabs – provision.sh
$ ml spinup --gpu H100 --region eu-central --jupyter
Creating VPC eu-central-vpc-01 … done
Provisioning node h100-xlarge … done
Mounting volume projects:/mnt/data … done
Starting Jupyter at https://eu.mlnb.io/abcd
SSH: ssh ubuntu@eu.mlnb.io -p 2231
$ _
Supported Models

Run open & custom models instantly

Mentron Cloud supports fine-tuning, inference, and deployment of popular open-source models. Connect your Hugging Face account or import checkpoints directly.

Features

Everything you need to launch and manage GPUs

Built for AI teams that care about control, performance, and compliance. Every instance runs in EU-based regions with private networking, fast images, and full observability.

🧪

Jupyter & Terminal

Launch a ready-to-use notebook and terminal with CUDA preinstalled, snapshots, and SSH access.

🌍

EU Hosted

Hosted in Austria and Germany by default, with private networking and audit logs.

🛡️

GDPR / AI-Act Compliant

Role-based access, data boundaries, exportable logs.

🚀

Fast Images

Curated, cached images for PyTorch, JAX, vLLM, and popular LLM base models.

Pricing

Transparent hourly rates. Discounts for commitments.

Pick from on-demand or reserved capacity. Stop anytime and pay only for what you use. No hidden storage or egress fees.

NVIDIA RTX 3060

0.50/hr

Cost-efficient for training, inference, and general GPU compute. Ideal for fine-tuning, diffusion, and prototyping workloads.

Launch

NVIDIA A100 (40/80GB)

Coming soon
1.50/hr

For production-grade training and high-throughput inference with NVLink.

Launch
FAQ

Frequently Asked Questions

Short answers to help you decide quickly. Reach out if you need custom terms.

How do I get started?
Sign up for our waitlist to access Mentron Labs early.
If you're a university or organization seeking broader access, contact us at contact@mentronlabs.com.
What is Mentron Labs?
Mentron Labs is a secure, EU-based GPU cloud for AI teams. It provides fast provisioning, training, and inference for open and custom models — without the complexity of managing infrastructure.
How does fine-tuning work?
Mentron Labs supports efficient adapter-based fine-tuning methods like LoRA, which train small adapter layers instead of the full model. This approach reduces compute requirements while maintaining performance.
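The savings from adapter-based fine-tuning can be made concrete with a small back-of-the-envelope sketch. The matrix sizes and rank below are illustrative, not Mentron defaults:

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters in a LoRA adapter for one weight matrix.

    LoRA freezes the original d_out x d_in weight W and learns a
    low-rank update B @ A, with A of shape (rank, d_in) and
    B of shape (d_out, rank).
    """
    return rank * (d_in + d_out)

# Example: one 4096x4096 projection matrix at rank 8 (sizes illustrative).
full = 4096 * 4096                            # 16,777,216 weights
lora = lora_trainable_params(4096, 4096, 8)   # 65,536 weights
print(f"LoRA trains {lora / full:.2%} of the full matrix")  # → 0.39%
```

At rank 8, the adapter trains well under 1% of the original parameters per matrix, which is why GPU memory and compute requirements drop so sharply.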
What data do I need to bring?
You can bring your own supervised or reinforcement learning dataset.
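For supervised data, a common interchange format is JSON Lines with one prompt/completion pair per line. The field names below are an assumption for illustration, not a documented Mentron schema:

```python
import json

# Hypothetical prompt/completion schema -- check the Mentron docs for
# the exact field names your project expects.
examples = [
    {"prompt": "Translate to German: Good morning", "completion": "Guten Morgen"},
    {"prompt": "Translate to German: Thank you", "completion": "Danke"},
]

# One JSON object per line (JSONL), a de-facto standard for SFT datasets.
jsonl = "\n".join(json.dumps(e, ensure_ascii=False) for e in examples)
print(jsonl.splitlines()[0])
```

Each line parses independently, so JSONL files stream well and are easy to shard across workers.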
Which models are supported?
Mentron currently supports a wide range of open-source models, from compact architectures like Llama-3.2-1B to larger models such as Mistral 7B. We continuously expand our lineup based on developer and enterprise demand.
How is my data handled?
Your data is used solely to train or fine-tune your models. We never use your datasets to train our internal systems. All workloads run within EU-based data centers under strict GDPR and AI-Act compliance.
Can I export my trained models?
Yes — you can download checkpoints, adapters, or full model weights directly from your Mentron Labs dashboard or via the Mentron API.
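As a sketch, a programmatic checkpoint download might be built like this. The base URL, path, and bearer-token auth scheme are assumptions for illustration, not documented Mentron API details:

```python
from urllib.request import Request

API_BASE = "https://api.mentronlabs.com/v1"  # hypothetical base URL

def build_checkpoint_request(token: str, model_id: str) -> Request:
    # The /models/{id}/checkpoint path and Bearer auth are
    # illustrative assumptions about the Mentron API.
    url = f"{API_BASE}/models/{model_id}/checkpoint"
    return Request(url, headers={"Authorization": f"Bearer {token}"})

req = build_checkpoint_request("YOUR_API_TOKEN", "my-lora-run-01")
# urllib.request.urlopen(req) would stream the checkpoint bytes;
# omitted here since the endpoint is hypothetical.
print(req.full_url)
```

Separating request construction from the network call keeps the auth logic testable without touching the (hypothetical) endpoint.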
Will Mentron be free?
Mentron Labs will be free during the beta phase. We’ll introduce transparent, usage-based pricing soon — you’ll only pay for active GPU hours with no hidden fees.
What teams say

A fast path from idea to training run

Spin-up to training in under five minutes. The cost alerts saved us on day one.
Lea M., ML Engineer @ early-stage startup
EU residency + SSO made security sign-off painless. Our researchers love the Jupyter defaults.
Dr. K. Steiner, Research Lead
Great 4090 availability for fine-tuning. Snapshots help us keep experiments reproducible.
Marco M., Data Scientist
Simple pricing and the spending guardrails give our finance team peace of mind.
Flor B., Ops