The Gai Way: Data Privacy & Security Governance

Part 1: Insight — Why did we build Gai?

As everyday users, we often face a "dilemma" when using AI:

  1. Local Model (running an AI model on your own PC): your data never leaves your sight, but inference is often painfully slow and the model's knowledge is outdated.
  2. Cloud Model (using a big-tech AI model): smart and fast, but it carries the "Data Training Risk" (your private input may be used to train future models) and the "Trust Black Box" (you cannot verify whether providers actually delete your data or subject it to manual review).

Our Conclusion: We can borrow computing power from big platforms, but the "power of life and death" over data must remain in the hands of the user.

Part 2: Privacy Strategy — Giving "Choice" Back to You

  • Bring Your Own Key (BYOK) & Auditing: Gai supports using your own API keys. We intentionally omit SSL Pinning. We encourage tech-savvy users to audit the traffic via packet capture; you will see that data goes directly to the provider. We never "peek" in the middle.
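The BYOK flow can be sketched in a few lines. Everything here is an illustrative assumption (a generic OpenAI-style endpoint and payload), not Gai's actual code; the point is that the request is addressed to the provider's own host, with your key, and nothing in between:

```python
import json
from urllib.parse import urlparse
from urllib.request import Request

# Illustrative sketch: endpoint, model name, and payload are assumptions
# for demonstration, not Gai's actual configuration.
API_KEY = "sk-your-own-key"  # a key you created and store yourself
ENDPOINT = "https://api.openai.com/v1/chat/completions"

req = Request(
    ENDPOINT,
    data=json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "hello"}],
    }).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

# A packet capture (Wireshark, mitmproxy, ...) will show the TLS session
# terminating at the provider's own host; there is no relay in between.
print(urlparse(req.full_url).hostname)
```

Because SSL pinning is deliberately absent, pointing a local proxy at traffic like this lets you confirm the destination yourself, which is exactly the audit we invite.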
  • Minimum Information Entropy (On-Demand Input): Many apps and plugins demand to scan your entire hard drive, which is dangerous. Gai adheres to the principle of "if you don't give it, it doesn't see it": it only processes the specific fragments you manually input, preventing cloud AI from piecing together your complete professional or personal profile.
  • Localized Logs (No Cloud Sync): Your chat history is never uploaded to the cloud and never forcibly synced. Logs stay only on your local device. You can manage or physically delete them like private documents. "Your data, your rules."
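A minimal sketch of what "logs stay local" means in practice. The file name and record shape below are assumptions for illustration, not Gai's real on-disk format; the point is that history is an ordinary local file you can read, move, or physically delete yourself:

```python
import json
import os
import tempfile
from datetime import datetime, timezone

# Hypothetical log location and record shape, for illustration only.
log_path = os.path.join(tempfile.gettempdir(), "gai_chat_history.jsonl")
if os.path.exists(log_path):
    os.remove(log_path)  # start from a clean file for this demo

def append_entry(role: str, text: str) -> None:
    """Append one chat turn to the local history file. No network involved."""
    entry = {"ts": datetime.now(timezone.utc).isoformat(),
             "role": role, "text": text}
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

append_entry("user", "hello")
append_entry("assistant", "hi there")

with open(log_path, encoding="utf-8") as f:
    entries = [json.loads(line) for line in f]

# "Your data, your rules": deleting the file removes the history entirely,
# because there is no cloud copy to sync back.
os.remove(log_path)
print(len(entries), os.path.exists(log_path))
```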

Part 3: Security Whitepaper — A Good Tool Doesn't Stare at Your Keys

1. System-Level Sandbox: Drawing a "Boundary" for AI

Many apps request "Full Disk Access" or "Administrator" privileges for convenience. Gai strictly follows Native System Sandbox standards:

  • Natural Isolation: The sandbox acts as a wall. Gai is contained inside, unable to access private photos, bank accounts, or system files outside the wall.
  • Explicit Consent for Export: Any file generated by AI is locked inside the sandbox. Unless you manually click "Export," it cannot save so much as a single image to your desktop.
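As a concrete illustration, on a macOS build this model corresponds to App Sandbox entitlements. The snippet below is a hypothetical configuration, not Gai's shipped entitlements file; the "user-selected" key is the OS-level form of "explicit consent": files become reachable only through a deliberate user action such as clicking "Export" in a save dialog.

```xml
<!-- Hypothetical macOS App Sandbox entitlements (an assumption for
     illustration, not Gai's actual build configuration). -->
<plist version="1.0">
<dict>
    <!-- Opt in to the sandbox: the "wall" around the app. -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Files are writable only where the user explicitly chose. -->
    <key>com.apple.security.files.user-selected.read-write</key>
    <true/>
    <!-- Outbound network access for API calls; nothing inbound. -->
    <key>com.apple.security.network.client</key>
    <true/>
</dict>
</plist>
```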

2. Autonomous Connector: No "Trojan Horses"

Popular MCP servers and Skills/Plugins are often "functional black boxes" provided by third parties; you cannot be sure whether they contain backdoors.

  • From "Third-Party Plugins" to "Self-Built Connection": We do not provide ready-made, risky skill packages. Instead, we provide the open-source PyGai Connector.
  • The Difference: A typical plugin is like hiring a stranger to clean your house while you are away; PyGai is a megaphone you assembled yourself. The code is transparent. AI can only execute the logic that you personally opened and wrote.
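The "self-built connection" idea can be sketched in a few lines of Python. The names here (`register`, `dispatch`) are hypothetical, not PyGai's actual API; the point is the allowlist: the model can only invoke logic you explicitly opted in, and everything else is refused.

```python
# Minimal sketch of a self-built connector (illustrative names, not
# PyGai's real API): only functions you register are ever callable.
ALLOWED = {}

def register(fn):
    """Opt a function in explicitly; nothing outside ALLOWED is callable."""
    ALLOWED[fn.__name__] = fn
    return fn

@register
def word_count(text: str) -> int:
    """Logic you opened and wrote yourself."""
    return len(text.split())

def dispatch(call: dict):
    """The model sends {'name': ..., 'args': ...}; unknown names are refused."""
    fn = ALLOWED.get(call["name"])
    if fn is None:
        raise PermissionError(f"{call['name']!r} is not a registered tool")
    return fn(**call["args"])

print(dispatch({"name": "word_count", "args": {"text": "the code is transparent"}}))
```

Because the registry starts empty and only your own `@register` decorations fill it, there is no pre-installed "stranger in the house" to audit.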

3. Traceable "Thinking Process"

  • Gai insists on Transparent Error Reporting: feedback from the API is passed through exactly as received, without "beautification." In addition, every request parameter and security setting is recorded in a local audit log. This "process evidence" makes every AI action traceable.
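The two guarantees above can be sketched together: errors are surfaced verbatim, and each request's parameters are appended to a local audit record. The log format and field names below are assumptions for illustration, not Gai's actual schema:

```python
import io
import json

# A StringIO stands in for an append-only local audit file.
audit_log = io.StringIO()

def call_and_record(request_params: dict, response: dict) -> str:
    """Record the request locally, then pass results (or errors) through."""
    audit_log.write(json.dumps({"request": request_params,
                                "response": response}) + "\n")
    if "error" in response:
        # Surface the upstream message exactly as received: no rewording.
        raise RuntimeError(response["error"]["message"])
    return response["text"]

call_and_record({"model": "example-model", "temperature": 0.2},
                {"text": "ok"})
try:
    call_and_record({"model": "example-model"},
                    {"error": {"message": "Rate limit exceeded"}})
except RuntimeError as exc:
    print(exc)  # the provider's own words, not a "beautified" summary
```

Even the failed call leaves a full audit entry, so "process evidence" survives regardless of outcome.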

Conclusion: Learning to "Drive" AI Safely

Gai does not strive for "hands-off" pseudo-automation. We pursue real assistance that is Visible, Controllable, and Understandable. Through the "Hard Isolation" of the system sandbox and the "Soft Logic" of the autonomous connector, we have put the AI in a cage.

In the world of Gai, AI is a lion in a cage, and the key to that cage is always in your hand.