Artificial intelligence (AI) is quickly becoming part of everyday work. From drafting emails to summarizing reports, these tools promise to save time and help teams work more efficiently. But for municipalities, the rise of AI also brings an important responsibility: protecting sensitive information.
Many AI platforms are public services that learn from the data entered into them. In some cases, information shared with these tools may be stored or used to train and improve the underlying system. For cities and towns, that means confidential information should never be entered into public AI platforms.
Sensitive information may include personnel records, legal documents, citizen data, financial details, or information related to ongoing investigations. Even internal communications or draft policies could create risk if shared with an external AI system.
AI works best when it is treated as a productivity tool, not a secure storage location or a decision-maker. While AI can generate helpful drafts or summaries, municipal staff should carefully review all outputs, because these systems can produce errors, outdated information, or incomplete answers.
Municipalities may benefit from developing simple internal guidelines for AI use. Establishing clear expectations—such as limiting the type of information that can be entered into AI tools and requiring human review of AI-generated content—can help protect both the organization and the community it serves.
As AI technology continues to evolve, thoughtful use and strong privacy practices will allow municipalities to take advantage of innovation while safeguarding the information their communities trust them to protect.