GitHub - GoogleCloudPlatform/kubectl-ai: AI powered Kubernetes Assistant
Summary
Overview
kubectl-ai is a tool that changes how you interact with Kubernetes, operating as an AI agent directly in the terminal. Its main purpose is to simplify and speed up Kubernetes operations by letting users address their clusters in natural language instead of composing complex kubectl commands. The agent interprets a query, translates it into the appropriate commands, executes the necessary actions, and returns results with explanations. Because kubectl-ai integrates with a wide range of Large Language Models (LLMs), both remote and local, it is a versatile and adaptable solution. It handles tasks from basic monitoring to creating deployments and troubleshooting, improving productivity and flattening the learning curve for managing Kubernetes environments. The tool aims to make container management more intuitive and efficient for developers and administrators.
Key Points
- Broad AI model compatibility: kubectl-ai stands out for its flexibility in supporting a variety of LLM providers. By default it uses Google's Gemini, but it also integrates with local models through Ollama and llama.cpp, as well as external services such as X.AI's Grok, Azure OpenAI, standard OpenAI (e.g. GPT-4.1), and other OpenAI-compatible APIs (e.g. Aliyun Qwen). Users can choose the model that best fits their performance, privacy, or cost requirements simply by setting environment variables and specifying the desired provider and model.
- Versatile interaction modes: The tool offers several ways to interact with the cluster. An interactive mode supports an ongoing conversation with the AI, preserving context between questions, which is ideal for extended debugging or exploration sessions. Users can also run one-off tasks by passing the query directly as an argument (kubectl-ai "fetch logs..."). For more scripted workflows, queries can also be piped in via stdin or combined with other Unix commands.
Contents
kubectl-ai
kubectl-ai is an AI-powered Kubernetes agent that runs in your terminal.
Quick Start
First, ensure that kubectl is installed and configured.
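To confirm the prerequisite, the standard kubectl commands below verify both the client installation and the cluster connection (these are plain kubectl commands, not part of kubectl-ai):

```bash
# Verify the kubectl client is installed and on your PATH
kubectl version --client

# Confirm kubectl can reach a cluster with your current kubeconfig
kubectl cluster-info

# Show which context kubectl-ai will operate against
kubectl config current-context
```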
Installation
Quick Install (Linux & MacOS only)
```bash
curl -sSL https://raw.githubusercontent.com/GoogleCloudPlatform/kubectl-ai/main/install.sh | bash
```
Manual Installation (Linux, MacOS and Windows)
- Download the latest release from the releases page for your target machine.
- Untar the release, make the binary executable, and move it to a directory in your $PATH (as shown below).

```bash
tar -zxvf kubectl-ai_Darwin_arm64.tar.gz
chmod a+x kubectl-ai
sudo mv kubectl-ai /usr/local/bin/
```
Usage
Using Gemini (Default)
Set your Gemini API key as an environment variable. If you don't have a key, get one from Google AI Studio.
```bash
export GEMINI_API_KEY=your_api_key_here
kubectl-ai

# Use a different Gemini model
kubectl-ai --model gemini-2.5-pro-exp-03-25

# Use the 2.5 flash (faster) model
kubectl-ai --quiet --model gemini-2.5-flash-preview-04-17 "check logs for nginx app in hello namespace"
```
Using AI models running locally (ollama or llama.cpp)
You can use kubectl-ai with AI models running locally. It supports ollama and llama.cpp as local model runtimes.
An example of using Google's gemma3 model with ollama:
```bash
# assuming ollama is already running and you have pulled one of the gemma models
# ollama pull gemma3:12b-it-qat

# enable-tool-use-shim because these models require special prompting to enable tool calling
kubectl-ai --llm-provider ollama --model gemma3:12b-it-qat --enable-tool-use-shim

# you can use the `models` command to discover the locally available models
>> models
```
Using Grok
You can use X.AI's Grok model by setting your X.AI API key:
```bash
export GROK_API_KEY=your_xai_api_key_here
kubectl-ai --llm-provider=grok --model=grok-3-beta
```
Using Azure OpenAI
You can also use Azure OpenAI deployment by setting your OpenAI API key and specifying the provider:
```bash
export AZURE_OPENAI_API_KEY=your_azure_openai_api_key_here
export AZURE_OPENAI_ENDPOINT=https://your_azure_openai_endpoint_here
kubectl-ai --llm-provider=azopenai --model=your_azure_openai_deployment_name_here

# or use Azure CLI authentication
az login
kubectl-ai --llm-provider=openai://your_azure_openai_endpoint_here --model=your_azure_openai_deployment_name_here
```
Using OpenAI
You can also use OpenAI models by setting your OpenAI API key and specifying the provider:
```bash
export OPENAI_API_KEY=your_openai_api_key_here
kubectl-ai --llm-provider=openai --model=gpt-4.1
```
Using OpenAI Compatible API
For example, you can use Aliyun qwen-xxx models as follows:
```bash
export OPENAI_API_KEY=your_openai_api_key_here
export OPENAI_ENDPOINT=https://dashscope.aliyuncs.com/compatible-mode/v1
kubectl-ai --llm-provider=openai --model=qwen-plus
```
- Note: kubectl-ai supports AI models from gemini, vertexai, azopenai, openai, grok, and local LLM providers such as ollama and llama.cpp.
Run interactively:
The interactive mode allows you to have a chat with kubectl-ai, asking multiple questions in sequence while maintaining context from previous interactions. Simply type your queries and press Enter to receive responses. To exit the interactive shell, type exit or press Ctrl+C.
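As a rough illustration of such a session (the `>>` prompt mirrors the models example above; the queries and responses are hypothetical, not captured output):

```bash
$ kubectl-ai
>> why is the nginx pod in the hello namespace restarting?
# ... the agent runs the relevant kubectl commands and explains the findings ...
>> show me its recent events
# ... context from the previous question is preserved ...
>> exit
```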
Or, run with a task as input:
```bash
kubectl-ai --quiet "fetch logs for nginx app in hello namespace"
```
Combine it with other unix commands:
```bash
kubectl-ai < query.txt
# OR
echo "list pods in the default namespace" | kubectl-ai
```
You can even combine a positional argument with stdin input. The positional argument will be used as a prefix to the stdin content:
```bash
cat error.log | kubectl-ai "explain the error"
```
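Along the same lines, a hypothetical pipeline that feeds live kubectl output into the agent (the query text is illustrative):

```bash
# pipe the output of a kubectl command in as stdin, with the query as a prefix
kubectl get events -n hello | kubectl-ai "summarize these events"
```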
Extras
You can use the following special keywords for specific actions:
- model: Display the currently selected model.
- models: List all available models.
- version: Display the kubectl-ai version.
- reset: Clear the conversational context.
- clear: Clear the terminal screen.
- exit or quit: Terminate the interactive shell (Ctrl+C also works).
Invoking as kubectl plugin
Use it via the kubectl plugin interface like this: kubectl ai. kubectl will find kubectl-ai as long as it's in your PATH. For more information about plugins, see: https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/
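A minimal sketch of the plugin flow, assuming the binary was moved to /usr/local/bin as in the installation step above:

```bash
# kubectl discovers executables named kubectl-<name> on PATH as plugins
kubectl plugin list    # should include /usr/local/bin/kubectl-ai

# Arguments after `kubectl ai` are handled by kubectl-ai
kubectl ai --quiet "list pods in the default namespace"
```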
Examples
```bash
# Get information about pods in the default namespace
kubectl-ai --quiet "show me all pods in the default namespace"

# Create a new deployment
kubectl-ai --quiet "create a deployment named nginx with 3 replicas using the nginx:latest image"

# Troubleshoot issues
kubectl-ai --quiet "double the capacity for the nginx app"

# Using Azure OpenAI instead of Gemini
kubectl-ai --llm-provider=azopenai --model=your_azure_openai_deployment_name_here --quiet "scale the nginx deployment to 5 replicas"

# Using OpenAI instead of Gemini
kubectl-ai --llm-provider=openai --model=gpt-4.1 --quiet "scale the nginx deployment to 5 replicas"
```
kubectl-ai will process your query, execute the appropriate kubectl commands, and provide you with the results and explanations.
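As a rough, hypothetical illustration of that translation step (the model chooses the actual commands at run time, so they may differ), the first two example queries above roughly correspond to:

```bash
# "show me all pods in the default namespace"
kubectl get pods --namespace default

# "create a deployment named nginx with 3 replicas using the nginx:latest image"
kubectl create deployment nginx --image=nginx:latest --replicas=3
```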
k8s-bench
The kubectl-ai project includes k8s-bench, a benchmark for evaluating the performance of different LLM models on Kubernetes-related tasks. Here is a summary from our last run:
| Model | Success | Fail |
|---|---|---|
| gemini-2.5-flash-preview-04-17 | 10 | 0 |
| gemini-2.5-pro-preview-03-25 | 10 | 0 |
| gemma-3-27b-it | 8 | 2 |
| Total | 28 | 2 |
See full report for more details.
Start Contributing
We welcome contributions to kubectl-ai from the community. Take a look at our contribution guide to get started.
Note: This is not an officially supported Google product. This project is not eligible for the Google Open Source Software Vulnerability Rewards Program.
Source: GitHub
