Martian

Features

  • C# SDK for the Martian Gateway API generated using AutoSDK
  • Intelligent LLM routing across 200+ AI models for cost, quality, and latency optimization
  • OpenAI-compatible chat completions with Martian-specific router parameters
  • Anthropic-compatible Messages API support
  • Model listing with pricing and reliability information
  • MEAI AIFunction tools for integration with any IChatClient
  • Supports all modern .NET features: nullability annotations, trimming, NativeAOT, etc.

Usage

using Martian;

using var client = new MartianClient(apiKey);

// Chat completion with cost optimization
var response = await client.CreateChatCompletionAsync(
    model: "openai/gpt-4.1-nano",
    messages: [new ChatCompletionMessage
    {
        Role = ChatCompletionMessageRole.User,
        Content = "Hello!",
    }],
    maxCost: 0.01f,
    willingnessToPay: 0.1f);
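The snippets in this README assume an `apiKey` variable is already in scope. The SDK does not prescribe where the key comes from; reading it from an environment variable is one common pattern (the variable name `MARTIAN_API_KEY` is an assumption here, not something the SDK requires):

```csharp
using System;

// MARTIAN_API_KEY is an assumed variable name, not mandated by the SDK;
// adjust it to match your own configuration.
var apiKey = Environment.GetEnvironmentVariable("MARTIAN_API_KEY")
    ?? throw new InvalidOperationException(
        "Set the MARTIAN_API_KEY environment variable.");
```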

Chat Completion

Basic example showing how to create a chat completion via the Martian Gateway.

using var client = new MartianClient(apiKey);

// Send a chat completion request through the Martian Gateway
var response = await client.CreateChatCompletionAsync(
    model: "openai/gpt-4.1-nano",
    messages: [new ChatCompletionMessage
    {
        Role = ChatCompletionMessageRole.User,
        Content = "What is 2 + 2?",
    }]);

List Models

Example showing how to list all available models with pricing information.

using var client = new MartianClient(apiKey);

// List all available models on the Martian Gateway
var response = await client.ListModelsAsync();

// Each model includes pricing and reliability information
var firstModel = response.Data[0];

Cost-Optimized Routing

Example showing how to use Martian's router parameters for cost-optimized model selection.

using var client = new MartianClient(apiKey);

// Use the router with cost constraints to optimize model selection.
// The models parameter restricts which models the router can choose from;
// willingnessToPay (the willingness_to_pay router parameter) controls
// the cost vs. quality trade-off.
var response = await client.CreateChatCompletionAsync(
    model: "openai/gpt-4.1-nano",
    messages: [new ChatCompletionMessage
    {
        Role = ChatCompletionMessageRole.User,
        Content = "Explain the concept of machine learning in one sentence.",
    }],
    models: ["openai/gpt-4.1-nano", "openai/gpt-4.1-mini"],
    maxCost: 0.01f,
    willingnessToPay: 0.1f);
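Because the router may serve the request with any model from the allowed set, it can be useful to inspect which model actually handled it. The property names below (`Model`, `Choices`, `Message.Content`) are assumptions based on the OpenAI-compatible chat-completion schema, not confirmed SDK names; verify them against the generated types.

```csharp
// Continues from the response above. Property names are assumptions
// based on the OpenAI-compatible schema; check the generated SDK types.
Console.WriteLine($"Routed to: {response.Model}");
Console.WriteLine(response.Choices[0].Message.Content);
```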

Support

Preferred place for bug reports: https://github.com/tryAGI/Martian/issues
Preferred place for ideas and general questions: https://github.com/tryAGI/Martian/discussions
Discord: https://discord.gg/Ca2xhfBf3v

Acknowledgments

This project is supported by JetBrains through the Open Source Support Program.