
Firebase AI Logic

Official skill for integrating Firebase AI Logic (Gemini API) into web applications. Covers setup, multimodal inference, structured output, and security.

$ npx promptcreek add firebase-ai-logic

Auto-detects your installed agents and installs the skill to each one.

What This Skill Does

Firebase AI Logic allows developers to integrate generative AI capabilities into their mobile and web applications using client-side SDKs. It enables direct calls to Gemini models from the app without requiring a dedicated backend. It supports both the Gemini Developer API and the Vertex AI Gemini API.

When to Use

  • Add generative AI features to mobile apps.
  • Add generative AI features to web apps.
  • Call Gemini models directly from the client, with no dedicated backend.
  • Prototype with the Gemini Developer API's free tier.
  • Scale to production with the Vertex AI Gemini API.
  • Set up and initialize the Firebase AI Logic SDK.

Key Features

Supports both the Gemini Developer API and the Vertex AI Gemini API.
Integrates through client-side SDKs; no dedicated backend required.
Ships as part of the standard Firebase Web SDK (requires Node.js 16+ and npm).
Uses `firebase init` to enable AI Logic, which automatically turns on the Gemini Developer API in the Firebase console.

Installation

Run in your project directory:
$ npx promptcreek add firebase-ai-logic

Auto-detects your installed agents (Claude Code, Cursor, Codex, etc.) and installs the skill to each one.


Firebase AI Logic Basics

Overview

Firebase AI Logic is a Firebase product that lets developers add generative AI to their mobile and web apps using client-side SDKs: you call Gemini models directly from your app without managing a dedicated backend. Previously known as "Vertex AI in Firebase", it represents the evolution of Google's AI integration platform for mobile and web developers.

It supports two Gemini API providers:

  • Gemini Developer API: free tier ideal for prototyping, pay-as-you-go for production
  • Vertex AI Gemini API: enterprise-grade production readiness at scale; requires the Blaze plan

Default to the Gemini Developer API; use the Vertex AI Gemini API only if the application requires it.

Setup & Initialization

Prerequisites

  • Before starting, ensure Node.js 16+ and npm are installed; install them if they aren't already available.
  • Identify the platform the user wants to build on before starting: Android, iOS, Flutter, or Web.
  • If their platform is unsupported, direct the user to the Firebase docs to learn how to set up AI Logic for their application (share this link: https://firebase.google.com/docs/ai-logic/get-started)

Installation

The library is part of the standard Firebase Web SDK.

npm install firebase@latest

List your projects; when run inside a Firebase directory (one containing a firebase.json), the currently selected project is marked "current":

npx -y firebase-tools@latest projects:list

Ensure there's at least one app associated with the current project:

npx -y firebase-tools@latest apps:list

Initialize the AI Logic SDK with the init command:

npx -y firebase-tools@latest init # Choose AI logic

This will automatically enable the Gemini Developer API in the Firebase console.

More info in Firebase AI Logic Getting Started

Core Capabilities

Text-Only Generation

Multimodal (Text + Images/Audio/Video/PDF input)

Firebase AI Logic allows Gemini models to analyze image files directly from your app. This enables features like creating captions, answering questions about images, detecting objects, and categorizing images. Beyond images, Gemini can analyze other media types like audio, video, and PDFs by passing them as inline data with their MIME type. For files larger than 20 megabytes (which can cause HTTP 413 errors as inline data), store them in Cloud Storage for Firebase and pass their URLs to the Gemini Developer API.

Chat Session (Multi-turn)

Maintain history automatically using startChat.

Streaming Responses

Show partial results as they arrive (a typing-effect user experience) by using generateContentStream instead of generateContent.

Generate Images with Nano Banana

  • Start with Gemini for most use cases; choose Imagen for specialized tasks where image quality and specific styles are critical. (Example: gemini-2.5-flash-image)
  • Requires the Blaze pay-as-you-go billing plan.

Search Grounding with the built-in googleSearch tool

Supported Platforms and Frameworks

Supported platforms and frameworks: Kotlin and Java for Android, Swift for iOS, JavaScript for web apps, Dart for Flutter, and C# for Unity.

Advanced Features

Structured Output (JSON)

Enforce a specific JSON schema for the response.

On-Device AI (Hybrid)

For web apps, the Firebase JavaScript SDK supports hybrid on-device inference: it automatically checks for Gemini Nano's availability (after installation) and switches between on-device and cloud-hosted prompt execution. Enabling the model in the Chrome browser requires specific steps; see the hybrid-on-device-inference documentation.

Security & Production

App Check

Recommended: enable Firebase App Check to prevent unauthorized clients from consuming your API quota. See App Check with reCAPTCHA Enterprise.

Remote Config

You do not need to hardcode model names (e.g., gemini-flash-lite-latest). Use Firebase Remote Config to update model versions dynamically without deploying new client code. See Changing model names remotely.

Initialization Code References

| Language, Framework, Platform | Gemini API provider | Context URL |
| :---- | :---- | :---- |
| Web Modular API | Gemini Developer API (Developer API) | firebase://docs/ai-logic/get-started |

Always use the most recent version of Gemini (gemini-flash-latest) unless another model is requested by the docs or the user. DO NOT USE gemini-1.5-flash

References

Web SDK code examples and usage patterns


Supported Agents

Claude Code, Cursor, Codex, Gemini CLI, Aider, Windsurf, OpenClaw

Details

License: MIT
Source: admin
Published: 3/18/2026
