name: gemini-api
description: Google Gemini API integration for text generation, multimodal AI, and function calling. Use when building applications with Gemini models (text/image/audio processing), implementing function calling or tool use, generating content with system instructions, or working with the Google GenAI SDK in Python/JavaScript/Go/Java/C#.

Gemini API

Build generative AI applications with Google's Gemini models.

Quick Reference

Installation

Python

pip install -q -U google-genai

JavaScript/TypeScript

npm install @google/genai

Go

go get google.golang.org/genai

Basic Usage

Set your API key via the GEMINI_API_KEY environment variable; the SDK clients below pick it up automatically.

Python

from google import genai

client = genai.Client()
response = client.models.generate_content(
    model="gemini-3-flash-preview",
    contents="Explain how AI works in a few words"
)
print(response.text)

JavaScript

import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({});

async function main() {
  const response = await ai.models.generateContent({
    model: "gemini-3-flash-preview",
    contents: "Explain how AI works in a few words",
  });
  console.log(response.text);
}
main();

Go

package main

import (
    "context"
    "fmt"
    "log"
    "google.golang.org/genai"
)

func main() {
    ctx := context.Background()
    client, err := genai.NewClient(ctx, nil)
    if err != nil {
        log.Fatal(err)
    }
    result, err := client.Models.GenerateContent(
        ctx,
        "gemini-3-flash-preview",
        genai.Text("Explain how AI works in a few words"),
        nil,
    )
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(result.Text())
}

Models

Model                     Use Case
gemini-3-pro-preview      Most intelligent, multimodal understanding, agentic
gemini-3-flash-preview    Balanced speed, scale, frontier intelligence
gemini-2.5-flash          Price-performance, high volume, agentic use cases
gemini-2.5-pro            Complex reasoning with thinking
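
Model availability can change as preview models graduate or are retired. To check which model IDs your API key can use, here is a minimal sketch using the Python SDK's model listing call (same client as in Basic Usage):

from google import genai

client = genai.Client()
# List the models available to this API key and print their identifiers
for model in client.models.list():
    print(model.name)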

Capabilities

  • Text Generation: Generate/complete text from prompts
  • Multimodal: Process images, audio, video alongside text
  • Function Calling: Connect to external APIs/tools
  • Structured Output: JSON mode for automated processing (see the sketch after this list)
  • Streaming: Real-time incremental responses
  • Long Context: Input millions of tokens
  • Thinking: Enhanced reasoning for complex tasks
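
Structured output is worth a quick example because it changes how you parse responses. A minimal Python sketch, using the response_mime_type and response_schema fields of GenerateContentConfig; Recipe is an illustrative Pydantic model, not part of the API:

from pydantic import BaseModel
from google import genai
from google.genai.types import GenerateContentConfig

# Illustrative schema; define whatever structure your application needs
class Recipe(BaseModel):
    name: str
    ingredients: list[str]

client = genai.Client()
response = client.models.generate_content(
    model="gemini-3-flash-preview",
    contents="Give me a simple cookie recipe.",
    config=GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=Recipe,
    ),
)
print(response.text)    # JSON string matching the Recipe schema
print(response.parsed)  # parsed object, when the SDK can validate the response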

System Instructions

from google import genai
from google.genai.types import GenerateContentConfig

client = genai.Client()
response = client.models.generate_content(
    model="gemini-3-flash-preview",
    contents="What is the best approach?",
    config=GenerateContentConfig(
        system_instruction="You are a helpful coding assistant."
    )
)

Function Calling

Define functions the model can call:

# 1. Define function declaration
set_light_values_declaration = {
    "name": "set_light_values",
    "description": "Sets the brightness and color temperature of a light.",
    "parameters": {
        "type": "object",
        "properties": {
            "brightness": {
                "type": "integer",
                "description": "Light level from 0 to 100"
            },
            "color_temp": {
                "type": "string",
                "enum": ["daylight", "cool", "warm"],
                "description": "Color temperature"
            }
        },
        "required": ["brightness", "color_temp"]
    }
}

# 2. Call model with function declarations
from google.genai.types import GenerateContentConfig, Tool

response = client.models.generate_content(
    model="gemini-3-flash-preview",
    contents="Turn the lights to warm at 50%",
    config=GenerateContentConfig(
        tools=[Tool(function_declarations=[set_light_values_declaration])]
    )
)

# 3. Check for function call and execute
if response.candidates[0].content.parts[0].function_call:
    fc = response.candidates[0].content.parts[0].function_call
    # Execute your function with fc.name and fc.args
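
Typically you then execute the requested function locally and return its result to the model so it can compose a final answer. A minimal sketch of that round trip, assuming a hypothetical local set_light_values implementation; the result goes back as a function response part:

from google.genai import types

# Hypothetical local implementation of the declared function
def set_light_values(brightness: int, color_temp: str) -> dict:
    return {"brightness": brightness, "color_temp": color_temp}

fc = response.candidates[0].content.parts[0].function_call
result = set_light_values(**fc.args)

# Wrap the result as a function response part and continue the conversation
function_response_part = types.Part.from_function_response(
    name=fc.name,
    response={"result": result},
)
follow_up = client.models.generate_content(
    model="gemini-3-flash-preview",
    contents=[
        types.Content(role="user", parts=[types.Part(text="Turn the lights to warm at 50%")]),
        response.candidates[0].content,                               # model's function call turn
        types.Content(role="user", parts=[function_response_part]),   # your function result
    ],
    config=GenerateContentConfig(
        tools=[Tool(function_declarations=[set_light_values_declaration])]
    ),
)
print(follow_up.text)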

Function Declaration Schema

{
    "name": "function_name",           # Unique, no spaces (use underscores)
    "description": "Clear explanation", # Crucial for model understanding
    "parameters": {
        "type": "object",
        "properties": {
            "param_name": {
                "type": "string|integer|boolean|array",
                "description": "Parameter purpose",
                "enum": ["option1", "option2"]  # Optional, for fixed values
            }
        },
        "required": ["param_name"]
    }
}

Streaming

for chunk in client.models.generate_content_stream(
    model="gemini-3-flash-preview",
    contents="Write a story"
):
    print(chunk.text, end="")

REST API

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-3-flash-preview:generateContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H 'Content-Type: application/json' \
  -X POST \
  -d '{
    "contents": [{
      "parts": [{"text": "How does AI work?"}]
    }]
  }'

Additional Resources


Sources scraped: 2026-02-06 from ai.google.dev
