The increased deployment of Artificial Intelligence around the world has ignited a very public and heated debate. While AI is being used to do things like sentence criminals, determine who should be hired and fired, and assess what loan rate you should be offered, it's also being leveraged to protect against poaching, detect illnesses sooner and more accurately, and shed new light on fighting climate change.

As we continue to develop AmandaAI here at AAHA Solutions, we're becoming increasingly involved in the field. And as the technology continues to advance, we'll take on more and more clients who want to incorporate AI into their software. Since we're helping to create an AI-enabled future, we have a responsibility to explore what exactly that means. So what better way to strike up a conversation about AI ethics than over a plate of sushi at our weekly lunch and learn?

Before launching into a big group discussion about ethics, I presented some background material to make sure we were all working from a common foundation. What kind of AI problems are we talking about and what are we not talking about? What are some of the ways that Machine Learning and Big Data can be harmful despite seemingly good intentions? Regarding AI and ethics, what work is already being done—particularly here in Canada? What is Machine Learning, really? Oversimplified spoiler alert: it’s about using stats to calculate probabilities.
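
To make that spoiler a little more concrete, here's a minimal, deliberately toy sketch of what "using stats to calculate probabilities" looks like in practice. The data and library choice here are ours, purely for illustration:

```python
# Toy illustration of machine learning as probability estimation:
# fit a simple statistical model on past examples, then ask it how
# likely a new, unseen case is to be positive.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Past observations: hours of study -> passed the test (1) or not (0).
hours = np.array([[1], [2], [3], [4], [5], [6], [7], [8]])
passed = np.array([0, 0, 0, 1, 0, 1, 1, 1])

model = LogisticRegression().fit(hours, passed)

# The model doesn't answer "yes" or "no"; it estimates a probability.
print(model.predict_proba([[4.5]])[0, 1])  # roughly a coin flip
```

The model never "understands" studying or tests; it just finds a statistical pattern in past numbers and expresses its prediction as a probability.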

Why should we care about ethics?

Suppose a well-funded company approaches you to develop an AI project. They're willing to pay you considerably more than any client you've worked with in the past, but you're hesitant about closing the deal because of a nagging thought in the back of your mind—does this feel right?

What do you do? How do you reconcile profits with ethics? Where do you draw the line between what’s acceptable and what isn’t with AI projects? Are my standards for what’s OK the same as yours? Is it possible this project could be used to harm people or to discriminate against them? How can we prevent that?

You need to have an actionable and well-thought-out response that defines your position as a company, and more importantly, your position as a group of human beings. You need some kind of framework that everyone in the company can point to and say: "These are our expectations for an AI project, this is how we handle safety, privacy, and inclusion, and this is how we address the concerns of our stakeholders."

Company values

Ultimately, it's a company's employees who bring its values to life. High-minded, top-down decrees don't count for much. So the hope is that by engaging in open, respectful, company-wide conversations about AI and ethics, the people who help build your shared ethical framework are the same people who will be most responsible for implementing it. This also keeps everyone accountable to one another, whether you're a developer, designer, business development VP, HR specialist, CEO, or project manager.

At AAHA Solutions, our vision is "to create software that impacts a billion lives." Whether the impact we have on those lives is positive or negative rests on how we assess the ethical implications of the projects we pursue. Our workflow already includes rigorous business and technical due diligence for evaluating potential new projects; for AI work, ethical due diligence needs to be given equal billing.

Consumer trust

In case you're not convinced that having a solid ethical position on AI development is important, the bottom line is that it makes good business sense, too. A study by the Capgemini Research Institute found that three in five consumers place higher trust in, are more loyal to, and spread positive word of mouth about companies whose AI interactions they perceive to be ethical. Ethics and profit aren't mutually exclusive.

The discussion

Painting the scenario

After describing the ground rules and expectations for our all-hands lunch and learn conversation, I outlined a scenario, based on work that real-life companies are actually doing, to frame our discussion:

Imagine there’s a company that wants to increase audience engagement in large venues—concerts, sporting events, etc. They want to use facial recognition to scan the audience and put people’s faces up on the venue’s big screen and apply silly Instagram-like filters (hats, sunglasses, hearts for eyes, etc.).

They also want to collect information about those in attendance: approximate age, gender, race, economic status, and "attractiveness," among others, so they can offer their clients "audience analytics" upgrade packages. This would include tying whatever information they can get from facial recognition to an audience member's seat location, ticket price, and any other data that was entered to purchase their ticket.

Finally, they want to tie the above information to any public social media accounts they can find for specific audience members, for subsequent engagement/marketing opportunities. 

Now imagine this company is super enthusiastic about working with us because of our expertise in facial recognition, they’re very well funded, and they want to know how soon we can start.

What follows is a summary of the main themes that I drew from AAHA Solutions' first company-wide roundtable discussion on AI ethics. The point of the conversation wasn't to create a polished AI ethics framework, but to start contemplating the inherent complexities of AI-related work and to think more deeply about finding the lines between what's OK, what's less OK, and what's NOT OK. In the short time we had, we were able to pool our opinions and build the foundation for future discussions on the topic.

Consent

As long as informed consent is given and the company’s goals for collecting data are transparent, most people seemed to be comfortable with their photos being taken:

“As long as there’s consent on what’s being captured and what it’s being used for, and that the data is being stored in a way that’s responsible, then I’m okay with my photo being taken.”

"I'm personally not too worried about the company using facial recognition to gather information for internal analysis. What worries me is if the photos are made publicly available and the possibility that the person sitting next to me at an event crops a photo with my face in it and shares it on social media without my consent. What would help is maybe sending a notification to ask for the permission of other people in the photo before sharing."

Some people were indifferent to having their photos taken because there’s already so much data about them floating around the Internet anyway. 

“To me, this is being done already, just without the face. You can go shopping anywhere, sign up for rewards cards, enter any type of store, and find out that they already have all your information anyways, from clothing size to purchase history. It’s not that different.”

Although specific opinions differed, the vast majority of the team agreed that they wouldn’t be 100% comfortable with taking on this project as described by the “client.” As we discussed the scenario in more detail, the conversation drifted to how the scope/intent of the project could potentially be changed to address end-user concerns while still delivering on the “client’s” goals. 

“In this event, you could have one gate that you enter if you want to share your information and one gate you enter if you don’t want to share your information.”

Could opting out have ramifications that seem punitive? If you opt out would you be offered worse seats?

“When this technology becomes widely adopted, if I don’t consent to having my picture shown, does that limit my ability to be in certain public spaces? Does it limit my ability to do certain things?”

Even if companies don’t make their photos and other data public, we don’t know how that information is being used internally.

"Even if someone successfully requests to have their face blurred in a public photo, that doesn't mean they're not still in the data the company is using internally."
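
One hypothetical way to make consent boundaries like these explicit in software (every field name below is ours, invented for illustration, not anything the "client" proposed) is to attach a consent scope to each captured record and check it at every point of use:

```python
# Hypothetical sketch: store an explicit consent scope with every captured
# face, and refuse any downstream use the person did not agree to.
from dataclasses import dataclass, field

@dataclass
class ConsentScope:
    public_display: bool = False         # show on the venue's big screen
    internal_analytics: bool = False     # "audience analytics" use
    social_media_matching: bool = False  # linking to public social accounts

@dataclass
class CapturedFace:
    face_id: str
    consent: ConsentScope = field(default_factory=ConsentScope)

def can_use(record: CapturedFace, purpose: str) -> bool:
    """Gate every downstream use on the recorded consent scope."""
    return getattr(record.consent, purpose, False)

# A fan who agreed to the big screen, but not to analytics or matching.
fan = CapturedFace("seat-14C", ConsentScope(public_display=True))
assert can_use(fan, "public_display")
assert not can_use(fan, "internal_analytics")
```

A scheme like this doesn't settle the ethical questions, but it does force every use of the data to pass through an explicit consent check rather than leaving it implicit.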

This segued into our next theme: what counts as acceptable use and management of this data?

Responsible management and usage of data

A shared concern raised during our discussion was the profiling of ticket holders based on the data collected by the AI system.  

“Using facial recognition data to profile a target market can be problematic because, say a company bases seat prices on specific characteristics, whether they’re looking for a certain demographic and age group with a certain economic status or certain level of attractiveness. Say they only want ‘attractive’ people by the front boards. Now if this is tied to your profile, are you going to be restricted from access to certain things? The company can sell to someone with higher economic status who’s going to spend more money at the concession stand, or avoid selling to people based on their perceived appearance in a certain picture.”

This is why data management needs to be a central part of the conversation we have with our clients. We need to ask why they are collecting certain pieces of data and what their intent is, because, depending on the goal, there might be a better, more ethical way to get there. In fact, figuring out what our clients’ goals are and mapping out the best route to attain those goals is the entire point of discovery. Moving forward, we’re looking at ways to adapt our discovery process to cater specifically to AI projects.  

On the flip side of this argument, market profiling and segmentation are already standard practice at many companies. In some contexts, AI is a tool that facilitates an already common practice.

“I think we should set the technology aside when we think about this problem. People get sensitive when we talk about AI and privacy, but what if you just look at the same scenario and replace AI with a person sifting and segmenting photos based on perceived characteristics? It’s the exact same thing.”

In response… 

"Segmenting customers as 'high' or 'low' value, manually, one at a time, based on a staff member's experience and intuition (and biases!), happens all the time as soon as we walk into stores. But does it change how we think about this when that 'labeling' is based on numbers in a mathematical model?"

The general consensus in the room was that using AI to facilitate market segmentation can be useful, but keeping bias from seeping into outputs is clearly a very complex issue. Just because AI is based on math and data doesn't mean its outputs are free of bias.

“I think there’s an opportunity on this ‘project’ to pick and choose the data we look at when we make decisions about how to segment an audience at a venue. If we control the data that we’re using to make certain decisions about people, then we could minimize the bias compared to judgements a human would make.” 

"If we had a blockchain methodology where we could see where the data is being used, for instance in court cases where data should be unbiased, what if one of the flags is 'race' and suddenly it's a flag that adds 5 years to the jail sentence—"

"—it's illegal to use race as a field in many parts of the world, so mathematical models get focused on factors that people think are without bias. But even data that looks 'unbiased' can carry bias. Something like a zip code might seem neutral, but in parts of the world where populations tend to be more segregated, that data can serve as a proxy for race. So the end results still have bias in them, even if it's sometimes harder to detect."
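
That proxy effect is easy to demonstrate. Here's a small, entirely synthetic sketch (our own invented numbers, for illustration only): even when the protected attribute is withheld from the training data, a model can still split its predictions along group lines through a correlated field like zip code:

```python
# Synthetic demonstration of proxy bias: the protected attribute is never
# shown to the model, yet predictions still diverge along group lines
# because "zipcode" is strongly correlated with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)  # protected attribute (withheld from model)
# Segregated housing: 90% of the time, zip code matches group membership.
zipcode = np.where(rng.random(n) < 0.9, group, 1 - group)
# Historical outcomes that were already biased against group 1.
outcome = ((group == 0).astype(float) + rng.normal(0, 0.5, n) > 0.5).astype(int)

# Train on zip code alone; "group" never appears in the features.
model = LogisticRegression().fit(zipcode.reshape(-1, 1), outcome)
pred = model.predict(zipcode.reshape(-1, 1))

for g in (0, 1):
    print(f"group {g}: favorable prediction rate = {pred[group == g].mean():.2f}")
# The gap persists even though the protected attribute was never an input.
```

In other words, dropping the protected field, as the earlier comment suggested, is necessary but nowhere near sufficient.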

Although short, our conversation unearthed a lot of valid concerns and complexities that we need to be mindful of and keep discussing as we continue to develop AI applications. As a company, this conversation helped us come face-to-face with the fact that problems related to ethics and technology aren't always as clear-cut as they might first seem. This was only the start of our conversation, and we will continue to have the difficult conversations that need to happen in order to grow and develop ethically guided policies regarding AI.

How do we move forward?

As a company, these are some of the next steps we will take to clarify our position regarding AI ethics. We encourage the entire tech community to get involved and join the conversation!

Recurring roundtable discussions

I really enjoyed facilitating this roundtable discussion and we plan on continuing the conversation about once a month. Our primary goal with these discussions is to explore the progress being made on AI ethics worldwide and fold those results into an actionable AI ethics framework for use here at AAHA Solutions. Ultimately, we’d love to collaborate with other tech companies to work toward a unified framework that benefits everyone.

A common pain point faced by the general public and companies trying to develop ethics guidelines is that existing approaches are often hard to read, ambiguous, and lengthy. In our discussion, we touched on the challenge of working toward a concise but comprehensive policy that’s accessible for a wide audience and memorable enough for a developer to keep in the back of their mind as they make decisions about how to implement given features on a project.

Dedicated Slack channel

The passion in the room was evident, and not all of us can wait a month to share our newest findings. To keep the conversation going, we set up an #ai-ethics Slack channel in our internal workspace to share links and exchange thoughts.

Education

The starting point of any meaningful discussion is education. Our AAHA Solutions family includes experts in design, development, project management, and business, but we're not experts in fields such as ethics or law. To help us grow our understanding of these domains, we'd love to extend invitations to specialists in relevant fields who'd be willing to share their knowledge.

If you’re interested in learning more about AI and ethics in your personal time, there are plenty of resources available to you. I’ve been doing a lot of learning on my own and wouldn’t hesitate to recommend these two books in particular:

  1. Prediction Machines: The Simple Economics of Artificial Intelligence
  2. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
