The national attention drawn to guarding against the misuse of Artificial Intelligence only seems to be intensifying, and for good reason. AI can be a useful tool to speed up and enhance business processes, but in the wrong hands it has the potential to cause major damage to infrastructure and can even help criminals bypass security measures. Connecticut Attorney General Tong has clearly taken notice, stating in his recently released memorandum on AI:
“The development and use of artificial intelligence (AI) is expanding with breathtaking speed and reach across the world. While these tools present new opportunities… AI can exacerbate discrimination, bias, and abuse; spread disinformation; and otherwise influence decision-making that leads to poor outcomes…”
In July 2023, the Connecticut Data Privacy Act took effect, expanding enforcement activity related to data privacy and unfair trade practices, including cases involving algorithmic tools for businesses dealing with personal data. The state is now taking additional steps to lock down data privacy and security with a new memorandum on artificial intelligence.
On February 25, 2026, Connecticut Attorney General William Tong released a detailed memorandum. It provides guidance on how current laws apply to AI tools used by businesses, including those involved in employment, credit, insurance, and consumer-facing decisions.
The memorandum is not a separate law. It clarifies what constitutes appropriate use of AI and automated decision-making technology, and how certain pre-existing laws and regulations apply to that use. The text also elaborates on why the memorandum is necessary, citing AI's capacity to exacerbate discrimination, bias, and abuse; spread disinformation; and otherwise distort decision-making.
The memorandum is intended to help guide both consumers and businesses on their rights and responsibilities under Connecticut law. It notes that businesses rely on AI for more complex functions, including tenant screening, employment decisions, credit and loan determinations, insurance claims, and targeted advertising, making caution an absolute necessity.
The text is broken down into four main sections describing how the Office of the Attorney General will enforce pre-existing state laws as they apply to the use of AI:
The first section explains that the Connecticut Attorney General has broad authority to enforce state and federal anti-discrimination laws and protect residents’ constitutional rights, including overseeing the collection, use, and protection of personal data and ensuring that automated systems do not enable discrimination. Importantly, it clarifies that state and federal laws intended to eliminate discrimination and create equal opportunity apply directly to the use of AI and automated decision-making. Organizations should not assume that relying on AI tools excuses them from following state and federal requirements; the Connecticut Office of the Attorney General fully intends to enforce consequences, where possible, to protect consumers.
This section reviews the intent of the CTDPA, which grants consumers rights over personal data that has been collected, including the rights to access, correct, and delete that data and to opt out of its sale and its use in targeted advertising or certain profiling.
Developers and businesses using AI are fully expected to follow all CTDPA guidelines and are expressly forbidden from discriminating against consumers for exercising their rights under the law. Additionally, AI developers, users, and integrators are required to implement and maintain data security safeguards to protect personal data that is used.
This section covers broader data security and breach notification laws, stating that pre-existing state data breach laws apply fully to incidents involving AI. Businesses are still required to implement reasonable security measures to protect personal information and to report certain data breaches in accordance with state requirements.
The final section specifically covers AI-related conduct that falls under Connecticut’s consumer protection laws, including those governing unfair trade practices.
The Office of the Attorney General further clarifies that it remains committed to holding individuals and businesses accountable for violations of Connecticut law and for harm caused through algorithms or evolving technologies. This includes AI systems used in ways that mislead consumers, compromise data security, or result in discriminatory outcomes.
Although the 2026 memorandum on AI is not a law itself, it provides clarity and guidance on how existing state and federal laws apply to AI, and emphasizes that using AI is in no way an excuse to circumvent Connecticut state law. As the use of AI evolves, state and federal regulators are increasingly focused on how it is used in employment and business contexts. For employers, this may include hiring and screening practices, performance evaluation, and workforce management.
Employers can find the full details and text of the memorandum here.
This content is for informational purposes only and shall not constitute legal opinion or advice. Consult your legal counsel to ensure compliance.