MCP & AI Agents: How to Connect Astrology APIs to Claude, GPT, and Gemini (2026 Guide)
AI agents need external tools to be useful. Astrology is one of the richest structured data domains available — birth charts, planetary positions, dasha timelines, dosha analysis. This guide shows exactly how to wire an astrology API into any agent framework, what the Model Context Protocol means in practice, and why the data layer matters as much as the AI layer.
Contents
1. What is MCP and why it matters for astrology APIs
2. How AI agents use astrology data: real use cases
3. Vedika's AI-native design: llms.txt, OpenAPI, ai-plugin.json
4. Code: Connecting Vedika to an AI agent via OpenAPI tool calling
5. Code: Natural language API vs raw endpoint APIs
6. Why in-house calculations matter for AI accuracy
7. Built-in AI vs MCP wrapper: what the difference means
8. Getting started guide
FAQ

1. What is MCP and Why It Matters for Astrology APIs
Model Context Protocol (MCP) is an open standard, originally developed by Anthropic, that defines how AI agents connect to external data sources and tools. Before MCP, every agent framework had its own proprietary way of registering tools — OpenAI had function calling schemas, LangChain had tool wrappers, AutoGPT had plugin configs. MCP is an attempt to standardize this: one protocol, any agent, any tool provider.
At its core, MCP works like this. An AI agent receives a user message. The agent's reasoning model decides it needs external data to answer. It calls a registered tool — which is just an HTTP endpoint described in a machine-readable schema. The tool returns structured JSON. The model uses that JSON to construct its final response. The user sees a well-grounded answer backed by real data, not the model's internal knowledge.
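The loop just described can be sketched in a few lines. Everything here is illustrative: the stubbed model, the stub tool, and its hardcoded Capricorn response stand in for a real LLM and a real ephemeris-backed endpoint.

```javascript
// Minimal tool-calling loop: stubbed model + stubbed tool.
// All names and data here are illustrative.
const tools = {
  get_planet_position: {
    description: 'Return the current sign of a planet (stub data)',
    execute: ({ planet }) => ({ planet, sign: 'Capricorn', retrograde: false })
  }
};

// Stand-in for the reasoning model: first pass requests a tool,
// second pass grounds its answer in the tool's JSON.
function modelStep(message, toolResult) {
  if (toolResult === null) {
    return { toolCall: { name: 'get_planet_position', args: { planet: 'Saturn' } } };
  }
  return {
    answer: `${toolResult.planet} is in ${toolResult.sign}` +
      (toolResult.retrograde ? ' (retrograde)' : '')
  };
}

function runAgent(userMessage) {
  let toolResult = null;
  for (let turn = 0; turn < 2; turn++) {
    const step = modelStep(userMessage, toolResult);
    if (step.toolCall) {
      toolResult = tools[step.toolCall.name].execute(step.toolCall.args);
    } else {
      return step.answer; // answer grounded in tool data, not model memory
    }
  }
}
```

Calling `runAgent('Where is Saturn right now?')` returns an answer built from the tool's structured response, which is the whole point of the pattern.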
For astrology, this matters enormously. Planetary positions change daily. Dasha periods are calculated from birth data. Nakshatra lords follow a precise 27-star sequence. None of this can be reliably retrieved from a language model's training data alone — it changes per person and per date. An astrology agent without tool access is just a model pattern-matching on generic horoscope text it saw during training. That is not useful for anyone who wants real calculations for a specific birth date and location.
The practical definition: MCP-compatible means your API publishes a machine-readable description of its endpoints that agent frameworks can auto-discover and register as callable tools. The three files that accomplish this are an OpenAPI spec, an llms.txt file, and an ai-plugin.json manifest. Vedika ships all three.
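As a rough illustration of what auto-discovery means, here is how a framework might turn OpenAPI path entries into function-calling tool definitions. The inline spec fragment and the `specToTools` helper are hypothetical, not Vedika's actual spec or any framework's real importer.

```javascript
// Hypothetical importer: turn OpenAPI path entries into function-calling
// tool definitions. The inline spec fragment is illustrative only.
function specToTools(spec) {
  const tools = [];
  for (const [path, methods] of Object.entries(spec.paths)) {
    for (const [method, op] of Object.entries(methods)) {
      tools.push({
        type: 'function',
        function: {
          name: op.operationId,
          description: op.summary,
          parameters: op.requestBody?.content?.['application/json']?.schema ?? {}
        },
        // kept alongside so an executor knows where to send the call
        endpoint: { method: method.toUpperCase(), path }
      });
    }
  }
  return tools;
}

const demoSpec = {
  paths: {
    '/v2/astrology/birth-chart': {
      post: {
        operationId: 'get_birth_chart',
        summary: 'Calculate a complete Vedic birth chart',
        requestBody: {
          content: {
            'application/json': {
              schema: {
                type: 'object',
                properties: { datetime: { type: 'string' } },
                required: ['datetime']
              }
            }
          }
        }
      }
    }
  }
};

const registered = specToTools(demoSpec);
```

Once registered, the agent can call `get_birth_chart` as a native tool without any hand-written definition.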
The astrology API space is beginning to segment on this axis. APIs that return raw JSON arrays of numbers are calculation engines — they are useful if a developer writes the interpretation layer manually. APIs designed for AI integration go further: structured schemas, natural language endpoints, clear field descriptions, and published discovery files. The distinction is not marketing; it changes how much work a developer has to do to make an agent that produces trustworthy answers.
2. How AI Agents Use Astrology Data: Real Use Cases
Before writing any code, it helps to understand the actual agent patterns that work well with astrology APIs. These are not hypothetical — they are the integration patterns Vedika's B2B customers have built in production.
Personal astrology assistants
A user provides their birth details once. The agent stores the birth data and calls the birth chart endpoint. Every subsequent conversation — "how is my career this year", "should I travel in April", "what does Mars in my 7th house mean" — is answered by calling the relevant dasha, transit, or placement endpoint with those stored birth details and feeding the response to the AI for interpretation. The agent's job is routing questions to the right endpoints; the API's job is returning accurate data; the AI's job is turning structured JSON into plain language explanation.
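The routing job can be sketched as a simple keyword dispatcher. In a real agent the model itself selects the tool; the `ROUTES` table below is an illustrative simplification of the question-to-endpoint mapping.

```javascript
// Keyword dispatcher sketch. In production the model picks the tool;
// this table only illustrates the question-to-endpoint mapping.
const ROUTES = [
  { keywords: ['dasha', 'period', 'year'], endpoint: '/v2/astrology/vimshottari-dasha' },
  { keywords: ['compatib', 'match', 'marriage'], endpoint: '/v2/astrology/guna-milan' },
  { keywords: ['today', 'panchang', 'muhurta'], endpoint: '/v2/astrology/panchang' }
];
const DEFAULT_ROUTE = '/v2/astrology/birth-chart'; // placements, houses, yogas

function routeQuestion(question) {
  const q = question.toLowerCase();
  const hit = ROUTES.find(r => r.keywords.some(k => q.includes(k)));
  return hit ? hit.endpoint : DEFAULT_ROUTE;
}
```

"How is my career this year" resolves to the dasha endpoint, while "what does Mars in my 7th house mean" falls through to the birth-chart endpoint.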
Matrimonial and matchmaking platforms
Two birth charts, one compatibility query. The agent calls the guna milan endpoint, receives the 36-point score and individual guna breakdown, and the AI explains which dimensions scored well and which need attention. This works as an agent tool because the response is structured and repeatable — not generated from pattern matching.
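A sketch of that summarization step, assuming a hypothetical response shape (`totalScore`, `gunas[].obtained`/`maximum`); check Vedika's actual schema for the real field names.

```javascript
// Summarize a structured guna milan response for the AI layer.
// The response shape here (totalScore, gunas[]) is hypothetical.
function summarizeGunaMilan(result) {
  const strong = [];
  const weak = [];
  for (const g of result.gunas) {
    (g.obtained / g.maximum >= 0.5 ? strong : weak).push(g.name);
  }
  return {
    total: result.totalScore, // out of 36
    verdict: result.totalScore >= 18 ? 'compatible' : 'needs attention',
    strong,
    weak
  };
}

const summary = summarizeGunaMilan({
  totalScore: 24,
  gunas: [
    { name: 'Nadi', obtained: 8, maximum: 8 },
    { name: 'Bhakoot', obtained: 0, maximum: 7 }
  ]
});
```

The AI then explains the `strong` and `weak` lists in plain language instead of inventing a score.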
Daily horoscope automation
An agent with access to the daily prediction endpoint and transit endpoints can generate personalized daily horoscopes for each user — not the generic 12-sign horoscopes, but calculations specific to each birth chart. At scale, this is what differentiates an AI-powered astrology app from a content reuse engine.
Muhurta and timing tools
Users ask "when is a good time to start this business" or "find me an auspicious date for the wedding in October". The agent calls the panchang and muhurta endpoints for a date range, filters by the user's preferences, and presents ranked options with reasoning. This is a task that would take a jyotishi hours manually and an AI agent minutes with the right API access.
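The filter-and-rank step might look like this, over mock candidates with hypothetical field names (`auspicious`, `score`, `weekday`).

```javascript
// Filter-and-rank sketch over mock muhurta candidates.
// Field names (auspicious, score, weekday) are illustrative.
function rankMuhurtas(candidates, prefs = {}) {
  return candidates
    .filter(c => c.auspicious && (!prefs.weekday || c.weekday === prefs.weekday))
    .sort((a, b) => b.score - a.score) // best option first
    .slice(0, prefs.limit ?? 3);
}

const options = rankMuhurtas([
  { date: '2026-10-04', weekday: 'Sunday', auspicious: true, score: 82 },
  { date: '2026-10-09', weekday: 'Friday', auspicious: true, score: 91 },
  { date: '2026-10-12', weekday: 'Monday', auspicious: false, score: 95 }
]);
```

The agent presents the ranked `options` with the reasoning behind each score; the inauspicious candidate never reaches the user regardless of its score.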
Enterprise CRM enrichment
Some astrology SaaS companies run background agents that enrich customer profiles. When a user signs up, the agent calls the birth chart and dasha endpoints in the background and stores the results. Every subsequent personalization decision — which content to show, which consultation to recommend, which timing to suggest — is informed by that person's actual planetary data.
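A minimal sketch of that background enrichment pattern, with an injected `fetchChart` function standing in for the real birth-chart call: compute once at signup, reuse for every later personalization decision.

```javascript
// Compute once at signup, reuse afterwards. fetchChart is an injected
// stand-in for the real birth-chart API call.
const profileCache = new Map();

async function enrichProfile(userId, birthData, fetchChart) {
  if (!profileCache.has(userId)) {
    // First call for this user: fetch and store the computed chart
    profileCache.set(userId, await fetchChart(birthData));
  }
  return profileCache.get(userId); // every later call is a cache hit
}
```

In production the cache would be a database table keyed by user, but the shape of the pattern is the same.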
3. Vedika's AI-Native Design: llms.txt, OpenAPI, ai-plugin.json
Most astrology APIs were designed for developers writing traditional request-response code. The schema was an afterthought; the documentation was for humans. Vedika was designed from the start for the agent era, which means three things are published as first-class artifacts.
OpenAPI 3.1 specification
Every Vedika endpoint is documented in a machine-readable OpenAPI spec available at https://api.vedika.io/openapi.json. This is not a generated stub — it includes parameter descriptions, response schemas, examples, and authentication headers. Agent frameworks like LangChain, CrewAI, LlamaIndex, and OpenAI Assistants can import this spec and register all 108+ endpoints as callable tools with zero manual tool definition work.
llms.txt
The llms.txt standard, proposed in 2024, gives AI crawlers and agent frameworks a structured summary of what a site or API offers. Vedika's https://vedika.io/llms.txt describes the API's capabilities, endpoint categories, authentication method, pricing model, and links to the full spec. When an agent framework scans for available tools, it finds Vedika's capabilities without scraping unstructured HTML.
ai-plugin.json manifest
The ai-plugin.json file at https://api.vedika.io/ai-plugin.json provides the metadata that plugin-compatible agent systems need: API name, description, authentication type, endpoint for the OpenAPI spec, and contact information. This is the same format originally defined for ChatGPT plugins, now widely adopted across agent frameworks.
What this means for integration: With these three files in place, adding Vedika to an MCP-compatible agent is a configuration step, not a development task. You point the agent framework at the OpenAPI spec URL, provide your API key, and the agent can call birth chart, dasha, planetary, panchang, and AI query endpoints without any further tool definition work on your side.
Beyond the discovery files, Vedika also ships a dedicated AI query endpoint (/api/vedika/query) that accepts a natural language question and birth details in a single call and returns a validated, interpreted response. This is different from a calculation endpoint — it is the entire agent loop compressed into one HTTP call, useful when you want Vedika to handle the interpretation rather than routing raw JSON to your own model.
4. Code: Connecting Vedika to an AI Agent via OpenAPI Tool Calling
The cleanest integration path is loading Vedika's OpenAPI spec directly into your agent framework. Here are working examples for the two most common patterns: OpenAI Assistants API and a LangChain agent.
Pattern A: OpenAI Assistants API with OpenAPI tool spec
// Register Vedika endpoints as tools in an OpenAI Assistant.
// The Assistants API takes function definitions, not an OpenAPI spec
// directly; derive the definitions below from the spec.
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Load Vedika's OpenAPI spec as the reference for parameter and
// response schemas (Node 18+ provides fetch globally)
const specResponse = await fetch('https://api.vedika.io/openapi.json');
const openApiSpec = await specResponse.json();
// Create an assistant with Vedika as a tool source
const assistant = await client.beta.assistants.create({
name: 'Astrology Advisor',
instructions: `You are an astrology advisor. When users ask about their birth chart,
planetary positions, dasha periods, or compatibility, use the Vedika astrology tools.
Always call the appropriate endpoint with the user's birth data.
Present results in clear, accessible language without jargon.`,
model: 'gpt-4o',
tools: [
{
type: 'function',
function: {
name: 'get_birth_chart',
description: 'Calculate a complete Vedic birth chart with planetary positions, houses, nakshatras, and active yogas',
parameters: {
type: 'object',
properties: {
datetime: {
type: 'string',
description: 'Birth datetime in ISO format: YYYY-MM-DDTHH:mm:ss'
},
latitude: {
type: 'number',
description: 'Birth latitude (e.g., 19.0760 for Mumbai)'
},
longitude: {
type: 'number',
description: 'Birth longitude (e.g., 72.8777 for Mumbai)'
},
timezone: {
type: 'string',
description: 'UTC offset in format +HH:MM (e.g., +05:30 for IST)'
}
},
required: ['datetime', 'latitude', 'longitude', 'timezone']
}
}
}
]
});
// Tool execution handler. The endpoint map covers more tools than the
// single get_birth_chart definition above; add matching function
// definitions as you register more endpoints.
async function executeTool(toolName, params) {
const endpointMap = {
get_birth_chart: '/v2/astrology/birth-chart',
get_dasha: '/v2/astrology/vimshottari-dasha',
get_panchang: '/v2/astrology/panchang',
get_compatibility: '/v2/astrology/guna-milan'
};
const endpoint = endpointMap[toolName];
const response = await fetch(`https://api.vedika.io${endpoint}`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.VEDIKA_API_KEY
},
body: JSON.stringify(params)
});
return response.json();
}
// Run a conversation turn
async function chat(threadId, userMessage) {
await client.beta.threads.messages.create(threadId, {
role: 'user',
content: userMessage
});
const run = await client.beta.threads.runs.createAndPoll(threadId, {
assistant_id: assistant.id
});
// Handle tool calls
if (run.status === 'requires_action') {
const toolCalls = run.required_action.submit_tool_outputs.tool_calls;
const toolOutputs = await Promise.all(
toolCalls.map(async (call) => ({
tool_call_id: call.id,
output: JSON.stringify(
await executeTool(call.function.name, JSON.parse(call.function.arguments))
)
}))
);
await client.beta.threads.runs.submitToolOutputsAndPoll(
threadId, run.id, { tool_outputs: toolOutputs }
);
}
const messages = await client.beta.threads.messages.list(threadId);
return messages.data[0].content[0].text.value;
}
Pattern B: LangChain agent with Vedika OpenAPI toolkit
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_community.agent_toolkits.openapi.toolkit import RequestsToolkit
from langchain_community.utilities.requests import TextRequestsWrapper
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
import requests, os
VEDIKA_API_KEY = os.environ["VEDIKA_API_KEY"]
# Load Vedika's OpenAPI spec
spec = requests.get("https://api.vedika.io/openapi.json").json()
# Create a requests wrapper with Vedika auth headers
requests_wrapper = TextRequestsWrapper(
headers={
"X-API-Key": VEDIKA_API_KEY,
"Content-Type": "application/json"
}
)
# The RequestsToolkit exposes generic HTTP request tools (GET, POST, etc.);
# it does not parse the spec itself, so the agent prompt should tell the
# model which Vedika endpoints exist
toolkit = RequestsToolkit(
requests_wrapper=requests_wrapper,
allow_dangerous_requests=True  # Required for POST requests
)
tools = toolkit.get_tools()
# Define the agent
llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_messages([
("system", f"""You are a Vedic astrology assistant. Use the HTTP request tools
to call the Vedika API at https://api.vedika.io. Available endpoints: {', '.join(spec['paths'].keys())}.
When a user provides birth information, POST to the appropriate endpoint to get
accurate calculated data before responding.
Base all interpretations on the data returned by the tools, not general knowledge."""),
MessagesPlaceholder(variable_name="chat_history"),
("human", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad")
])
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# Example: Ask about a birth chart
response = agent_executor.invoke({
"input": "My birth details: June 15, 1990, 2:30 PM, Mumbai, India. What is my current dasha period and what does it mean?",
"chat_history": []
})
print(response["output"])
5. Code: Natural Language API vs Raw Endpoint APIs
There are two fundamentally different ways to use Vedika in an agent. Understanding the tradeoff helps you choose the right architecture for your use case.
Approach A: Raw calculation endpoints (structured JSON)
Your agent calls individual calculation endpoints and passes the JSON to your AI model for interpretation. You control the interpretation logic. This is the MCP tool-calling pattern.
// Raw endpoint approach: agent calls calculation, AI interprets
async function answerAstrologyQuestion(userQuestion, birthData) {
// Step 1: Get structured chart data from Vedika
const chartResponse = await fetch('https://api.vedika.io/v2/astrology/birth-chart', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.VEDIKA_API_KEY
},
body: JSON.stringify({
datetime: birthData.datetime, // '1990-06-15T14:30:00'
latitude: birthData.latitude, // 19.0760
longitude: birthData.longitude, // 72.8777
timezone: birthData.timezone, // '+05:30'
ayanamsa: 'lahiri'
})
});
const chart = await chartResponse.json();
// Step 2: Get dasha data separately
const dashaResponse = await fetch('https://api.vedika.io/v2/astrology/vimshottari-dasha', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.VEDIKA_API_KEY
},
body: JSON.stringify({
datetime: birthData.datetime,
latitude: birthData.latitude,
longitude: birthData.longitude,
timezone: birthData.timezone
})
});
const dasha = await dashaResponse.json();
// Step 3: Pass structured data to your AI model with the user's question
// chart.planets: [{name, sign, house, degree, retrograde, nakshatra, dignity}, ...]
// dasha.currentDasha: {mahadasha, antardasha, start, end}
// dasha.upcoming: [{lord, start, end}, ...]
const aiPrompt = `
User question: ${userQuestion}
Birth chart data (computed, accurate):
Planets: ${JSON.stringify(chart.planets, null, 2)}
Houses: ${JSON.stringify(chart.houses, null, 2)}
Active Yogas: ${JSON.stringify(chart.yogas, null, 2)}
Current Dasha:
Mahadasha: ${dasha.currentDasha.mahadasha} (${dasha.currentDasha.start} to ${dasha.currentDasha.end})
Antardasha: ${dasha.currentDasha.antardasha}
Answer the user's question based only on the computed chart data above.
`;
// Send to your AI model (any model, any provider)
return await yourAIModel.generate(aiPrompt);
}
Approach B: Vedika's native AI endpoint (one call, interpreted response)
Your agent sends the birth details and the user's question in one call. Vedika handles chart generation, multi-layer validation, and AI interpretation internally, returning a ready-to-display response. This eliminates multiple round trips and the interpretation burden.
// Native AI endpoint approach: one call, fully interpreted response
async function askVedikaAI(userQuestion, birthData) {
const response = await fetch('https://api.vedika.io/api/vedika/query', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.VEDIKA_API_KEY
},
body: JSON.stringify({
question: userQuestion,
birthDetails: {
datetime: birthData.datetime, // '1990-06-15T14:30:00'
latitude: birthData.latitude, // 19.0760
longitude: birthData.longitude, // 72.8777
timezone: birthData.timezone // '+05:30'
},
system: 'vedic', // 'vedic', 'western', or 'kp'
language: 'en' // or 'hi', 'ta', 'te', 'bn', 'gu', etc.
})
});
const result = await response.json();
// result.response: fully interpreted, validated astrology response
// result.chart: computed planetary data (for your reference/storage)
// result.confidence: 0.0-1.0 validation score
// result.corrections: any errors the validator caught and fixed
return result.response;
}
// Usage — single call replaces the multi-step approach above
const answer = await askVedikaAI(
'What is my current dasha period and how will it affect my career?',
{
datetime: '1990-06-15T14:30:00',
latitude: 19.0760,
longitude: 72.8777,
timezone: '+05:30'
}
);
When to use which: Use raw endpoints (Approach A) when you want your own AI model to do the interpretation, you need maximum control over the response format, or you are building a data pipeline that stores chart data in your own database. Use the native AI endpoint (Approach B) when you want a turnkey interpreted response, you are building a user-facing chat interface, or you want Vedika's validation layer to catch hallucinations before they reach your users.
6. Why In-House Calculations Matter for AI Accuracy
This is the part of astrology API selection that most developers underestimate, and it directly determines whether your AI agent produces trustworthy answers.
The principle is simple: an AI model is only as accurate as the data it receives. If the tool response contains incorrect planetary positions, the model will construct a confident, fluent explanation of wrong information. Language models are very good at coherent storytelling; they are not able to detect mathematical errors in the input data they are given. The validation has to happen before the AI ever sees the data.
The two-hop error problem
Some astrology APIs act as proxies — they accept your request and forward it to another calculation service, then return the result. This creates a two-hop architecture: your agent calls API A, API A calls API B, the result travels back. Errors at either hop compound. If API B returns the wrong ayanamsa correction or places a planet in the wrong house, API A will return that error, your agent will call your AI with that error, and your AI will interpret the error confidently and fluently.
Vedika runs Swiss Ephemeris in-house. The planetary calculations use validated Lahiri ayanamsa with the Calendar Reform Committee's standard constant — the same precision reference used by professional Jyotish software. There is no downstream dependency to fail. What the API returns is what the ephemeris computed. When your agent calls Vedika, you are one hop from the source.
The validation layer that protects agent responses
When using Vedika's native AI endpoint, there is a second protection layer: the response validator. After the AI generates its interpretation, the validator cross-checks every factual claim in the response against the computed chart data. If the AI says "Saturn is retrograde" but Saturn is direct in the computed data, the validator catches it and corrects it before the response leaves the server. Confidence scores (0.0 to 1.0) are returned with every response so you can monitor interpretation quality programmatically.
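The cross-checking idea can be illustrated with a toy validator for one kind of claim, retrograde status. Vedika's real validator is internal and far more general; this only shows the principle of correcting a response against computed data.

```javascript
// Toy validator for one claim type: retrograde status. Vedika's real
// validator is internal; this only illustrates the cross-check.
function validateRetrogradeClaims(responseText, planets) {
  const corrections = [];
  let text = responseText;
  for (const p of planets) {
    const claim = new RegExp(`${p.name} is retrograde`, 'i');
    if (claim.test(text) && !p.retrograde) {
      // AI claimed retrograde, but the computed data says direct: fix it
      text = text.replace(claim, `${p.name} is direct`);
      corrections.push(`${p.name}: retrograde -> direct`);
    }
  }
  // Naive confidence: fraction of planets that needed no correction
  const confidence = 1 - corrections.length / Math.max(planets.length, 1);
  return { text, corrections, confidence };
}

const checked = validateRetrogradeClaims(
  'Saturn is retrograde in your 10th house.',
  [{ name: 'Saturn', retrograde: false }, { name: 'Mars', retrograde: true }]
);
```

The corrected `checked.text` is what leaves the server, and `checked.corrections` is the audit trail your application can log.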
| Calculation approach | Error risk | AI response quality |
|---|---|---|
| Proxy API (forwards to 3rd party) | High (two failure points) | Unpredictable — errors propagate silently |
| In-house ephemeris, no AI validation | Low at data layer, higher at AI layer | Good data, unvalidated interpretation |
| In-house ephemeris + response validator | Lowest (validated at both layers) | Highest — errors corrected before delivery |
The ayanamsa problem specifically
Ayanamsa is the angular correction applied in Vedic astrology to convert tropical coordinates to sidereal. Different ayanamsa systems (Lahiri, Raman, Krishnamurti) place planets in different signs. A planet can be in Aries by one system and Taurus by another — literally a different sign. An AI agent that receives data computed with the wrong ayanamsa, or from an API that does not document which ayanamsa it uses, will interpret the wrong planetary placements. There is no way to detect this error from the AI layer. It requires verification at the calculation layer, where the mathematics can be checked against a known reference.
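The arithmetic is simple enough to show directly: sidereal longitude is the tropical longitude minus the ayanamsa, and the sign is the 30-degree segment the result lands in. The values below are illustrative round numbers, not real ephemeris output.

```javascript
// Sidereal = tropical minus ayanamsa; the sign is the 30-degree
// segment the result lands in. Values here are illustrative.
const SIGNS = [
  'Aries', 'Taurus', 'Gemini', 'Cancer', 'Leo', 'Virgo',
  'Libra', 'Scorpio', 'Sagittarius', 'Capricorn', 'Aquarius', 'Pisces'
];

function siderealSign(tropicalLongitude, ayanamsa) {
  // Normalize into [0, 360) before bucketing into signs
  const sidereal = ((tropicalLongitude - ayanamsa) % 360 + 360) % 360;
  return SIGNS[Math.floor(sidereal / 30)];
}

// A planet at 35 degrees tropical sits in Taurus; with a roughly
// 24-degree ayanamsa the sidereal position is about 11 degrees: Aries.
```

The same tropical position lands in different signs depending on the ayanamsa value applied, which is why an undocumented ayanamsa is undetectable from the AI layer.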
7. Built-In AI vs MCP Wrapper: What the Difference Means
The market is currently segmenting on this architectural choice, and it is worth understanding clearly before committing to an integration approach.
The MCP wrapper approach
An astrology API with "MCP support" that consists of a published OpenAPI spec is a wrapper — it gives your agent structured access to calculation data, but the interpretation still happens in your AI model or in a separately connected AI service. This is a legitimate architecture. The workflow is: user question → agent → calculation API (tool call) → structured JSON → external AI → interpreted response → user.
The wrapper approach gives you maximum control and flexibility. You choose which AI model interprets the data. You control the prompt, the persona, the response format. If you already have an AI stack and just need accurate astrological data, this is the right approach and Vedika's OpenAPI spec supports it fully.
The built-in AI approach
Vedika's native query endpoint goes further than a calculation wrapper. When you call /api/vedika/query, the following happens inside a single API call:
1. The birth chart is computed from Swiss Ephemeris (planets, houses, nakshatras, yogas, divisional charts, dasha periods).
2. A complete chart summary is prepended to the AI prompt — every planetary fact the AI can reference, generated from code, not from the model's memory.
3. System isolation rules enforce that Vedic, Western, and KP systems never mix in the same response.
4. The AI generates an interpretation grounded only in the chart data — it cannot contradict the computed positions.
5. The response validator checks every factual claim against the computed data, corrects errors, and scores confidence.
6. The validated response is returned to your application — ready to display to the user.
| Dimension | Vedika built-in AI | MCP wrapper to external AI |
|---|---|---|
| Integration complexity | One API call | Multiple calls + orchestration |
| Hallucination protection | Built-in validator (v7.1) | Requires your own prompt engineering |
| System isolation (Vedic/Western/KP) | Enforced at API level | Requires your own prompt rules |
| Model flexibility | Vedika Intelligence Engine | Any model you choose |
| Response format control | Vedika default format | Full control via your prompts |
| Language support | 30 languages (API param) | Depends on your model |
| Streaming support | SSE streaming available | Depends on your model |
| Latency | Single round trip | Multiple round trips |
The two approaches are not mutually exclusive. A common architecture is: use Vedika's native AI endpoint for user-facing chat responses (single call, validated, streamed), and use Vedika's calculation endpoints as tools in a background agent that populates a customer's data profile. Both paths go through the same accurate ephemeris.
8. Getting Started Guide
Here is the fastest path from zero to a working astrology agent.
Step 1: Get an API key and test with the sandbox
Go to vedika.io/console and create an account. Before subscribing, test against the free sandbox at vedika.io/sandbox — it has 65 mock endpoints that respond with realistic astrology data without requiring authentication or incurring costs. This is the right place to build and test your agent tool definitions.
# Test the sandbox — no auth required
curl -X POST https://vedika.io/sandbox/v2/astrology/birth-chart \
-H "Content-Type: application/json" \
-d '{
"datetime": "1990-06-15T14:30:00",
"latitude": 19.0760,
"longitude": 72.8777,
"timezone": "+05:30"
}'
# Returns realistic mock planetary data — use this to build your tool definitions
Step 2: Load the OpenAPI spec into your agent framework
Once you have a key, fetch the spec and register it. For most frameworks, this is a one-liner.
# Python — LangChain
import requests
spec = requests.get("https://api.vedika.io/openapi.json").json()
# Pass spec to your agent toolkit
// JavaScript — raw fetch
const spec = await fetch('https://api.vedika.io/openapi.json').then(r => r.json());
The spec documents all 108+ endpoints with parameter schemas, response shapes, authentication headers, and examples. Most agent frameworks (LangChain, CrewAI, LlamaIndex, AutoGen) can import it directly.
Step 3: Start with five core endpoints
You do not need all 108 endpoints on day one. For most astrology agents, these five cover the majority of user questions:
| Endpoint | Use case |
|---|---|
| /v2/astrology/birth-chart | Complete natal chart (planets, houses, nakshatras, yogas) |
| /v2/astrology/vimshottari-dasha | Current and upcoming dasha periods |
| /v2/astrology/planets | Lightweight planetary positions only |
| /v2/astrology/panchang | Daily panchang (tithi, nakshatra, yoga, karana, vara) |
| /v2/astrology/guna-milan | Compatibility score for two birth charts |
Step 4: Use the native AI endpoint for chat interfaces
If you are building a chat UI, the native AI endpoint returns validated, interpreted responses in 30 languages. Register it as a single tool in your agent that takes a question and birth details, and returns a ready-to-display string.
// Register as a single agent tool
const vedikaQueryTool = {
name: "ask_vedika_ai",
description: `Ask Vedika AI an astrology question. Provide the user's birth details
and their question. Returns a complete, validated astrology interpretation.
Use this tool for any question about the user's birth chart, planetary positions,
dasha periods, doshas, yogas, compatibility, or timing.`,
parameters: {
type: "object",
properties: {
question: {
type: "string",
description: "The astrology question to answer"
},
datetime: { type: "string", description: "Birth datetime: YYYY-MM-DDTHH:mm:ss" },
latitude: { type: "number", description: "Birth latitude" },
longitude: { type: "number", description: "Birth longitude" },
timezone: { type: "string", description: "UTC offset: +HH:MM" },
system: {
type: "string",
enum: ["vedic", "western", "kp"],
description: "Astrology system to use",
default: "vedic"
},
language: {
type: "string",
description: "Response language code (en, hi, ta, te, bn, gu, etc.)",
default: "en"
}
},
required: ["question", "datetime", "latitude", "longitude", "timezone"]
}
};
Step 5: Handle streaming for better UX
For chat interfaces, use the streaming endpoint so responses appear incrementally rather than after a full generation delay. Vedika supports Server-Sent Events (SSE) on the AI query endpoint.
// Streaming AI response for chat UI
async function* streamVedikaResponse(question, birthData) {
const response = await fetch('https://api.vedika.io/api/vedika/query', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.VEDIKA_API_KEY,
'Accept': 'text/event-stream'
},
body: JSON.stringify({
question,
birthDetails: birthData,
stream: true
})
});
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = decoder.decode(value, { stream: true });
// Simplification: assumes complete SSE lines arrive per chunk;
// buffer partial lines across reads in production
const lines = chunk.split('\n').filter(l => l.startsWith('data: '));
for (const line of lines) {
const data = line.slice(6);
if (data === '[DONE]') return;
try {
const parsed = JSON.parse(data);
if (parsed.delta) yield parsed.delta;
} catch {}
}
}
}
// In your chat handler:
for await (const token of streamVedikaResponse(userMessage, birthData)) {
sendToClient(token); // Push tokens to UI as they arrive
}
Frequently Asked Questions
What is MCP and how does it apply to astrology APIs?
MCP (Model Context Protocol) is an open standard for connecting AI agents to external tools. For astrology APIs, it means an agent can call endpoints like birth chart or dasha as native tools and receive structured JSON for interpretation. Vedika supports MCP-compatible integration through its OpenAPI spec, llms.txt, and ai-plugin.json.
Does Vedika have native MCP support?
Vedika ships the three files agent frameworks need: OpenAPI 3.1 spec at api.vedika.io/openapi.json, llms.txt at vedika.io/llms.txt, and ai-plugin.json at api.vedika.io/ai-plugin.json. These enable auto-discovery and tool registration without manual definition. Vedika also has a built-in AI endpoint that returns validated interpretations — something no external MCP wrapper provides.
Can I use Vedika with GPT-4o function calling?
Yes. Import the Vedika OpenAPI spec into your GPT-4o function definitions or use the LangChain OpenAPI toolkit. The spec documents all parameters and response schemas. Test in the free sandbox at vedika.io/sandbox before going live with a paid key.
Why does in-house calculation matter for AI accuracy?
An AI model cannot detect mathematical errors in the data it receives. If the astrology API returns wrong planetary positions (wrong ayanamsa, proxy chain error, stale data), the AI will produce a confident and fluent interpretation of incorrect facts. Vedika runs Swiss Ephemeris in-house and validates AI responses against computed data before delivery — errors are caught before they reach your users.
Is there a free way to test before subscribing?
Yes. The sandbox at vedika.io/sandbox has 65 mock endpoints that return realistic astrology data without authentication or billing. Use the sandbox to build and test all agent tool definitions. Switch to your live API key when you are ready for production.
What languages does Vedika support for AI responses?
30 languages, including Hindi, Tamil, Telugu, Bengali, Gujarati, Kannada, Malayalam, Marathi, Punjabi, and major global languages. Pass the language code in the API request. The AI generates the response directly in the target language — it is not a post-translation of an English response.
Connect your AI agent to Vedika
108+ astrology endpoints. Built-in AI with hallucination protection. OpenAPI spec, llms.txt, and ai-plugin.json published. Free sandbox for agent development.
Starter plan from $12/month. No free tier, no demo tokens — every query uses in-house Swiss Ephemeris with validated calculations.