Start-ups must sow stringent ethical values to reap AI’s rewards

Ethical AI is quickly becoming a business imperative as well as a moral one
Artificial intelligence is often described as a garden: water it diligently, and it will flourish. It’s an apt metaphor, but as any green thumb will tell you, it’s not quite that simple. Careful pruning and continual weeding are crucial parts of the process.
The same can be said of AI, as Amazon's failed recruiting algorithm proved. Trained on a decade of résumés that skewed heavily male, the system learned to discriminate against female candidates, causing the company to miss out on top talent while damaging its brand. The problem was not a shortage of data (water); developers lacked the ethical framework (pruning) and safeguards (weeding) to keep poison ivy (negative outcomes) from sprouting.
It’s a stark example of why ethical AI is a business imperative. A Capgemini survey of 4,400 consumers found that one-third of respondents would stop interacting with an unethical brand, while more than half said they would buy more from ethical ones. Founders must embed ethical AI policies in everything they do to cultivate the technology’s opportunities while avoiding its risks.
Build ethics from the ground up
A rubric for ethical decision-making helps to gauge the success of an AI’s predictions against the desired outcomes. It also builds awareness of AI ethics across the organization and incentivizes employees to identify and mitigate ethical risks.
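In practice, such a rubric can include concrete, automated checks. The Python sketch below shows one illustrative example: comparing a model's selection rates across demographic groups and flagging any group that falls below a "four-fifths" threshold, a rule of thumb borrowed from US hiring guidelines. The group labels, data, and threshold are hypothetical, not a prescription from any of the companies mentioned here.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs from a model's outputs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the best-treated group's rate (the illustrative four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit of a screening model's decisions:
outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False)]
print(disparate_impact_flags(outcomes))
# -> {'group_a': False, 'group_b': True}: group_b is flagged for review
```

A check like this is only one line item on a rubric, but it turns an abstract value ("fairness") into a number someone can be accountable for.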
It’s important to bring in ethical and legal expertise early in the product development process. Many venture capital firms, incubators, and accelerators have advisors who can help. Other helpful resources include the Algorithmic Impact Assessment and the Montreal Declaration for a Responsible Development of Artificial Intelligence.
“You have to have someone who is accountable for justifying the use of the prediction that comes out regardless of what it is,” says Natalie Cartwright, Co-Founder and COO of Finn AI, a start-up using AI-enabled chatbots in the financial sector. “Depending on the risk associated with that, it has to be more or less structured and sophisticated.”
Respect data privacy and security
Data equals value for AI systems, and that information is becoming a precious resource. Consumers are more aware than ever of the importance of safeguarding their personal data and care deeply about how it is used.
Start-ups must therefore give users a reason to trust them before they can acquire the data needed to build robust AI systems.
“The more transparent you are, the more trustworthy you are,” says Helen Kontozopoulos, Co-Founder of ODAIA, a start-up using AI to improve customer journeys. “Always think about how you can develop more trust with your customers and develop trust around your AI. [It has to be] ethics-first, privacy-first, security-first.”
Integrating privacy by design can be a valuable way to win over users who are protective of the data they share. Europe’s General Data Protection Regulation (GDPR) makes this explicit by having users decide upfront what data collection they are comfortable with, but companies should also take care to collect only what their algorithms need to make accurate predictions.
“We have no interest in information that is at all sensitive. It’s an easy way to get compliance, but also to protect the customer,” says Cartwright. “If someone puts something into our virtual assistant that has a number that might disclose some personal information, that gets redacted immediately before it even makes it to our system.”
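The sketch below illustrates the kind of pre-ingestion scrubbing Cartwright describes, assuming a simple rule that long digit runs (account, card, or phone numbers) are sensitive. The pattern and replacement token are illustrative assumptions, not Finn AI's actual implementation.

```python
import re

# Runs of eight or more digits (allowing spaces or dashes) often indicate
# account, card, or phone numbers; this pattern is an illustrative assumption.
SENSITIVE_NUMBER = re.compile(r"\b(?:\d[ -]?){7,}\d\b")

def redact(message: str) -> str:
    """Scrub potentially sensitive number sequences before a message
    is stored or passed along to the rest of the system."""
    return SENSITIVE_NUMBER.sub("[REDACTED]", message)

print(redact("My card is 4111 1111 1111 1111, what's my balance?"))
# -> "My card is [REDACTED], what's my balance?"
```

The design choice matters as much as the code: redacting at the edge, before data ever reaches internal systems, means there is nothing sensitive to leak downstream.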
Adopt a lifecycle approach
An AI company’s ethical obligations don’t end once the product is released. Algorithms require quality checks to ensure they continue to function as intended. Start-ups should regularly evaluate AI performance using a mix of tools such as client surveys, focus groups, and testing protocols.
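One such testing protocol can be as simple as periodically scoring the model on a human-reviewed sample and alerting when accuracy drifts below its launch baseline. The sketch below illustrates the idea; the baseline, tolerance, and sample data are hypothetical.

```python
def check_drift(labelled_sample, predict, baseline=0.92, tolerance=0.05):
    """Score the model on a human-reviewed sample and report whether
    accuracy has drifted below the launch baseline minus tolerance."""
    correct = sum(predict(text) == expected for text, expected in labelled_sample)
    accuracy = correct / len(labelled_sample)
    return accuracy, accuracy < baseline - tolerance

# Hypothetical weekly check on reviewed chatbot conversations:
sample = [("check my balance", "balance_inquiry"),
          ("send $50 to mom", "transfer_funds")]
accuracy, needs_review = check_drift(sample, lambda text: "balance_inquiry")
if needs_review:
    print(f"Accuracy {accuracy:.0%} is below baseline; trigger a review.")
```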
Bringing the customer into the process is a powerful way to build trust and transparency while crowdsourcing insights that those inside the organization might miss. For start-ups like Finn AI that build AI products for businesses rather than consumers, that means instilling their organization’s ethical values in their clients.
“We’re a B2B2C business, so we don’t get to dictate how they brand their virtual assistants,” says Cartwright. “But we advise them and share very clear data on what the implications are and coach them that this is a tool that should not be gendered.”
Upholding the highest ethical AI standards can be hard work, but the effort is well worth it. Start-ups that build trustworthy AI products will see their brand value bloom and their profits blossom.