Navigating AI data security risks: strategies for responsible innovation

AI data security is a top concern as artificial intelligence transforms industries like AECO, manufacturing, and media. Learn about risks, regulations, and proactive strategies companies use to ensure ethical AI development and safeguard sensitive data.

A woman analyzes lines of code displayed on large screens.

Shawn Radcliffe

October 17, 2024

  • Artificial intelligence (AI) is transforming businesses in many sectors, bringing concerns about data security and other potential pitfalls of the technology.

  • AI is changing rapidly. Governments and industry groups are developing regulations to address these issues, and organizations also need to be proactive about risk management around AI.

  • This process is about building trust with customers and other companies through consistent performance and reliability, and through transparency about how AI is being developed and used responsibly.

Artificial intelligence (AI) is transforming many businesses, including those in the AECO, product design and manufacturing, and media and entertainment industries. But even as AI is enabling companies in these sectors to enhance how they work, this ever-advancing technology carries potential risks that many business leaders already recognize.

Data security issues involving AI are a key concern for companies, although business leaders may also be keeping an eye on other negative uses of this technology such as phishing and vishing, identity theft, and document fraud.

Such AI concerns can be stronger within certain sectors. One recent poll—the Yooz AI in the Workplace: 2024 Construction Industry Snapshot—found that the construction industry has higher levels of awareness (62%) around the harmful uses of AI, compared with 57% of respondents overall, a group that also includes the manufacturing, automotive, retail, restaurant, health care, finance, and insurance industries.

Despite these concerns about AI, the survey shows 40% of construction industry respondents reported being very optimistic about this technology being used ethically. Compare this to 25% of respondents outside the industry who said the same. Additionally, in Autodesk’s 2024 State of Design & Make report, 78% of respondents from Design and Make industries agreed that AI will enhance their industry.

Allaying concerns about AI use

To allay fears about the negative uses of AI, some companies are taking steps on their own to ensure that they use this technology responsibly. Governments are also prodding companies in that direction; in July 2023 the Biden administration in the US secured voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI.

Autodesk approaches this through its Trusted AI program, a team of legal experts, privacy and governance specialists, information security professionals, and AI and data experts focused on safely implementing AI through adherence to strong internal standards and policies. This includes collaborating with the US National Institute of Standards and Technology (NIST) as part of the US AI Safety Institute Consortium (AISIC) on developing guidelines and standards for AI measurement and policy, as well as seeking feedback from customers about their concerns related to AI.

Barry Scannell, a partner in law firm William Fry’s Technology Group who specializes in AI, says the European Union’s recently implemented Artificial Intelligence Act (AI Act) provides a good template for what companies should be doing, even if they are not subject to that regulation.

“For companies that are developing AI and thinking about using AI, using a risk-management framework is an important first step.”

—Aaron Cooper, senior VP of global policy, BSA | The Software Alliance

With AI systems changing rapidly and new regulations being passed in different jurisdictions, organizations need to be proactive when it comes to addressing concerns about the potential harms of this technology. “For companies that are developing AI and thinking about using AI, using a risk-management framework is an important first step,” says Aaron Cooper, a senior vice president of global policy at BSA | The Software Alliance, a public policy advocacy association for the software industry.

“One of the things that BSA has done is put out recommendations to C-suite officers to think about making sure that when they incorporate AI, they’re doing it responsibly,” Cooper says. “That kind of step is important regardless of whether there’s regulation that requires it.”

This effort is needed to ensure that a company uses AI ethically, says Cooper, and also to avoid violating other laws or damaging the company’s credibility if, for some reason, a negative occurrence related to AI ends up on the front page of a newspaper. “So, putting some guardrails in place at the outset is important,” he says.

Carrying out an inventory of AI systems

A man uses a tablet to monitor data in a server room.
Security strategies include encrypting, anonymizing, filtering, and controlling data.

Scannell also recommends that companies do an inventory of their AI systems—what are their intended purposes, who are they going to affect, and what are the potential risks and misuses. “This should also include a fundamental rights impact assessment,” he says, “and a data protection impact assessment.”
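
For teams putting this advice into practice, one lightweight starting point is a structured inventory record that both compliance and engineering staff can query. The sketch below is illustrative only: the field names, such as intended_purpose and dpia_completed, and the example entry are assumptions made for this example, not terms drawn from the AI Act or any particular assessment framework.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (illustrative fields only)."""
    name: str
    intended_purpose: str
    affected_parties: list[str]           # e.g., employees, customers, job applicants
    personal_data_used: bool              # flags the need for a data protection impact assessment
    potential_risks: list[str] = field(default_factory=list)
    potential_misuses: list[str] = field(default_factory=list)
    fundamental_rights_assessed: bool = False
    dpia_completed: bool = False          # data protection impact assessment

# Hypothetical example entry
resume_screener = AISystemRecord(
    name="resume-screening-model",
    intended_purpose="Rank incoming job applications for recruiter review",
    affected_parties=["job applicants"],
    personal_data_used=True,
    potential_risks=["discriminatory ranking", "training-data leakage"],
    potential_misuses=["fully automated rejection without human review"],
)
```

A record like this makes it easier to see at a glance which systems still owe a fundamental rights or data protection impact assessment, and who needs to be involved.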

The bigger challenge for organizations, though, is not the AI impact assessment, he says. “The challenge is actually organizational, particularly for larger organizations, because it requires a very large pool of stakeholder involvement.”

In terms of data protection, the process for AI is the same as when implementing other kinds of technology. There is a strong need for encrypting or anonymizing data, filtering out data that is protected, and ensuring that there are rigorous access controls.
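
To make that concrete, the sketch below shows one common pattern: dropping fields flagged as protected and replacing direct identifiers with salted hashes before records reach a training or inference pipeline. It is a minimal illustration built on assumed field names (ssn, email, and so on), not a complete anonymization scheme; hashing is pseudonymization rather than true anonymization, and real deployments also need key management, access logging, and review of quasi-identifiers.

```python
import hashlib

PROTECTED_FIELDS = {"ssn", "health_status"}      # assumed examples of fields to filter out entirely
IDENTIFIER_FIELDS = {"email", "customer_id"}     # assumed direct identifiers to pseudonymize

def pseudonymize(value: str, salt: str) -> str:
    """Replace an identifier with a salted hash so records stay linkable but not readable."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def prepare_record(record: dict, salt: str) -> dict:
    """Drop protected fields and pseudonymize identifiers before AI processing."""
    cleaned = {}
    for key, value in record.items():
        if key in PROTECTED_FIELDS:
            continue                              # filter out protected data
        if key in IDENTIFIER_FIELDS:
            cleaned[key] = pseudonymize(str(value), salt)
        else:
            cleaned[key] = value
    return cleaned

# Hypothetical usage
raw = {"email": "jane@example.com", "ssn": "123-45-6789", "project": "Bridge A"}
print(prepare_record(raw, salt="rotate-this-secret"))
```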

And because AI systems are ultimately software, they should be assessed for vulnerabilities that could be exploited by a threat actor. The same line of thinking should also be applied to vendors involved in the development or use of a company’s AI products, with similar risk-based questions asked of these third parties.

This entire process is about building trust with customers and other companies, which can only happen through consistent performance and reliability, and by being as transparent as possible about how AI is being used responsibly.

Cooper sees this trust as a key aspect of ensuring that “AI is something that companies in all different industry sectors feel comfortable using and taking advantage of.”

“Because if you have companies that are not developing AI responsibly,” he says, “that puts the next player in the value chain—whether it’s the end user of a system or somewhere in between—in a really bad spot.”

This is the goal of the EU’s AI Act and similar efforts underway in the United States, Cooper says—“to help create an environment that allows AI to flourish and encourages innovation, and at the same time, makes sure that, as AI is being developed, it’s being done in a responsible manner.”

Regulating ethical use of AI

In late September 2024, more than 100 companies (including Autodesk) signed onto the EU AI Pact, becoming the first companies to pledge to apply the principles of the EU AI Act. Yet even with voluntary efforts by companies to show that they are developing and deploying AI responsibly, many people still want governments to regulate this technology, an AuthorityHacker survey found.

In this survey of US residents, 79.8% believe the government should implement strict AI regulations—even if it slows down technological innovation. A key concern among respondents was privacy, with 82.45% concerned about the use of personal data to train AI systems.

An analysis by the same company also found that nearly two-thirds of the world’s countries are working on regulating AI, with varying levels of progress. Among the leaders in this area is the European Union with its AI Act, which was formally adopted by the European Council on May 21, 2024, and will take effect in phases over the next three years.

The AI Act aims to regulate artificial intelligence by categorizing AI systems based on their risk levels and setting specific requirements for each category. AI systems posing minimal or limited risk, such as spam filters or AI-enabled video games, face few obligations beyond light transparency requirements. High-risk AI systems, such as AI-based medical systems or AI systems used for hiring, must meet a stricter set of requirements to gain access to the EU market. Certain AI systems deemed to have unacceptable risk, such as those that allow “social scoring” by governments or companies, will be banned.
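
For teams tracking which internal systems fall into which bucket, the tiers can be captured in something as simple as a lookup alongside the inventory sketched earlier. The mapping below is a rough, non-authoritative paraphrase for illustration only, using the hypothetical systems from the earlier sketch; classifying any real system under the AI Act requires legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned (e.g., social scoring)"
    HIGH = "strict requirements before EU market access (e.g., hiring, medical)"
    LIMITED_OR_MINIMAL = "light or no obligations beyond transparency (e.g., spam filters, games)"

# Illustrative, non-authoritative classification of hypothetical systems
assumed_classification = {
    "resume-screening-model": RiskTier.HIGH,
    "spam-filter": RiskTier.LIMITED_OR_MINIMAL,
}

for system, tier in assumed_classification.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```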

This approach is quite sensible, says Scannell. “There’s a reason why the prohibited aspects of AI are prohibited—they’re shady,” he says. In addition, “if you look at the AI Act, a lot of the rules are what stakeholders, customers, users, and investors would want a big company to be doing when they’re using AI.”

Overall, Scannell thinks the AI Act’s risk-based approach is “the right way to go, because the more prescriptive you try to be, the more out-of-date it becomes.” Instead, by focusing on high-risk uses of AI, the law continues to be relevant even as the technology changes. “[Generative AI] technology didn’t exist when Europe started looking at regulating AI,” he says.

In terms of data protection, Scannell says the AI Act and the European Union’s earlier General Data Protection Regulation (GDPR) are complementary pieces of legislation. For example, if a company is processing large amounts of personal data for AI uses, the GDPR requires it to carry out a data protection impact assessment. So in the EU’s AI Act, “there are a lot of references back to the GDPR,” he says.

Cooper agrees that the EU’s AI Act, in general, takes the right approach by focusing on high-risk use cases of AI. However, “there are going to be a lot of questions about how [the act] gets implemented,” he says. In particular, he would have liked to see more clarity about the kinds of things that developers and deployers are specifically responsible for.

Regulating AI in the United States

California state flag waving near the Capitol building dome.
California is one state taking the lead in regulating AI systems.

As of now, the United States does not have a countrywide regulation similar to the EU’s AI Act. However, in October 2023, the Biden administration issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which is intended to ensure that AI systems are safe and being used responsibly. Cooper says this order “was designed to make sure the administration was ready, in a whole-government approach, to address AI issues.”

While the executive order is stronger than voluntary commitments by companies, its effectiveness will depend on the Department of Commerce, the Department of Homeland Security, and other agencies creating standards and regulations for the oversight of AI.

Special task forces in the US House and Senate are also starting to explore how to tackle AI issues from a legislative perspective, with some bipartisan efforts, says Cooper, but “where we’ve seen the most movement toward the creation of new AI laws is in the US states.” Both California and Colorado have taken the lead in regulating the development and deployment of certain AI systems.

“One of the things that is similar to what the EU has done and what has happened so far in the US, including in places like Colorado, is that the primary focus of AI regulation has been on high-risk use cases,” says Cooper.

“This makes sense, because with those situations, we can easily identify what harms might happen in the world when using AI and encourage—or require—companies to take steps to mitigate those harms.”

For high-risk cases, “it’s not that AI is not appropriate to use in those situations,” says Cooper, “but we’d want to make sure, from a public policy perspective, that companies that are training or using AI for those situations take extra precautions to make sure AI is not being trained or used in a way that’s going to have discriminatory results.”

Ensuring interoperability across countries

One main concern for companies that deploy or provide AI software, such as Autodesk, is that each country—or even each US state—may end up adopting widely different laws regulating the use of AI. This could require companies operating internationally to comply with multiple, differing regulations, which could hinder innovation or force business leaders to divert resources into meeting the varying requirements. This happened to some extent with data protection laws, with the United States taking a different approach than the EU’s GDPR.

Ideally, “we want to see rules that are interoperable among like-minded nations,” says Cooper. So far, he thinks the outlook for interoperability is good, with G7 countries—France, Germany, Italy, Japan, the United States, the United Kingdom, and Canada—having conversations about these issues, “to try to make sure that there are consistent, if not identical, approaches to regulating AI,” he says.

Of course, this could change when a new administration takes office in the United States or the European Union, or if US states decide to take new approaches to tackling AI, Cooper says. However, “in general, governments have been focused on high-risk use cases—regulating and creating guardrails around those,” he says. “I think we’ll continue to see the same approach, at least for a while.”

Pratyush Rai, CEO of Merlin AI, a generative AI browser extension, believes that the EU AI Act, while thorough, may overwhelm smaller startups with its complexity. “Regulations that are clear, consistent, and not overly burdensome will allow companies like ours to focus on innovation while ensuring the ethical and secure use of AI,” he says. “On our end, we can engage in transparency, self-regulation, and collaboration with regulators to shape practical and effective rules. Establishing industry standards or certifications could also help businesses comply more easily without getting bogged down in complexity.”

“Regulations that are clear, consistent, and not overly burdensome will allow companies like ours to focus on innovation while ensuring the ethical and secure use of AI.”

—Pratyush Rai, CEO, Merlin AI

Scannell points to how the GDPR data protection law was implemented in different countries as a sign of how companies may handle having to meet AI regulations in different jurisdictions.

“Something we saw with the GDPR is that, as opposed to going with the lowest common denominator, companies did the opposite,” he says—international organizations that were subject to different data protection laws chose to meet the highest threshold. “That means we’ll catch every regulation within it,” he says. “And of course, the highest threshold was always the GDPR.”

All these issues are being tackled within the context of generative AI systems, which cannot yet think on their own. But Scannell expects it is just a matter of time before more autonomous systems are available, offering new potential but also possible harms.

“This technology really did need to be regulated, because without regulation, it could have really nefarious uses and negative impacts,” says Scannell. “Now we have a framework where we can introduce really startlingly new technology sensibly and safely.”


About Shawn Radcliffe

Shawn Radcliffe is an Ontario, Canada–based freelance journalist and yoga teacher, specializing in writing stories about health, medicine, science, architecture, engineering, and construction, as well as yoga and meditation. Reach him at ShawnRadcliffe.com.
