No. Absolutely not. This is a hard line in our Terms of Service.
Training, fine-tuning, distilling, or otherwise using Vedika outputs as training data for any AI model — your own, your client's, or a third party's — is a material breach of our Terms. This applies to every endpoint: V1 AI chat, V2 structured calculations, voice outputs, PDF reports, and embeddings.
What we actively enforce:
- Response watermarking. Every text output carries invisible provenance markers. We can identify a Vedika-derived response even after paraphrasing, translation, or model distillation.
- Behavioral baselines. Each key's request pattern is continuously compared against a per-account baseline. Shifts consistent with bulk scraping or synthetic-data harvesting — identical prompts at scale, uniform temperature sweeps, training-set-shaped sampling — trigger abuse detection.
- Burst and rate-shape limits. We limit not just requests-per-minute but also sustained per-hour volumes typical of dataset construction.
- Hard block. Confirmed violations result in API_KEY_SUSPENDED on all your keys, immediate forfeiture of remaining wallet balance, permanent ban of the billing entity, and legal referral under your subscription agreement. No refunds for time already used.
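To make the rate-shape point concrete: checking volume over multiple window sizes catches both short bursts and the slow, steady pull typical of dataset construction. A minimal sketch of that idea, purely illustrative and not Vedika's actual enforcement code:

```python
import time
from collections import deque

class RateShapeLimiter:
    """Illustrative multi-window limiter: a per-minute burst cap plus a
    sustained per-hour cap. Thresholds here are made up for the example."""

    def __init__(self, per_minute=60, per_hour=1000):
        self.per_minute = per_minute
        self.per_hour = per_hour
        self.timestamps = deque()  # arrival times of recent requests

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        self.timestamps.append(now)  # sketch: rejected requests still count
        # Evict anything older than the largest window (one hour)
        while self.timestamps and now - self.timestamps[0] > 3600:
            self.timestamps.popleft()
        last_minute = sum(1 for t in self.timestamps if now - t <= 60)
        return last_minute <= self.per_minute and len(self.timestamps) <= self.per_hour
```

A steady trickle that stays under the per-minute cap can still trip the per-hour cap, which is exactly the shape bulk harvesting tends to have.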
What IS allowed: using Vedika outputs in your live end-user product (chat app, voice assistant, report generator, etc.), caching responses per-user for your own app's UX, and aggregate analytics. In short: serve your users, don't build a competitor.
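The permitted per-user caching can be as simple as keying stored responses by (user, prompt) with a TTL so stale answers expire. A sketch under stated assumptions: `fetch` stands in for whatever function your app uses to call the Vedika API, and the TTL value is arbitrary.

```python
import time

CACHE_TTL = 300  # seconds; tune to your app's freshness needs
_cache = {}      # (user_id, prompt) -> (expires_at, response)

def cached_answer(user_id, prompt, fetch, now=None):
    """Return this user's cached response for the prompt, or fetch a fresh one.
    `fetch` is a placeholder for your own Vedika API call."""
    now = time.time() if now is None else now
    key = (user_id, prompt)
    hit = _cache.get(key)
    if hit and hit[0] > now:
        return hit[1]          # fresh cache hit: no API call
    response = fetch(prompt)   # miss or expired: call the API once
    _cache[key] = (now + CACHE_TTL, response)
    return response
```

Keying by user keeps this squarely in "serve your users" territory: each entry exists to speed up one user's own session, not to accumulate a corpus.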
If you're evaluating for research or academic use, email enterprise@vedika.ai — we have a separate licensing path and will reply within 48 hours. Unauthorized training use costs far more later than a 10-minute email now.